2026-03-31 01:37:14.044888 | Job console starting
2026-03-31 01:37:14.055829 | Updating git repos
2026-03-31 01:37:14.654603 | Cloning repos into workspace
2026-03-31 01:37:14.894619 | Restoring repo states
2026-03-31 01:37:14.919153 | Merging changes
2026-03-31 01:37:14.919183 | Checking out repos
2026-03-31 01:37:15.188828 | Preparing playbooks
2026-03-31 01:37:15.960003 | Running Ansible setup
2026-03-31 01:37:20.449099 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-31 01:37:21.235415 |
2026-03-31 01:37:21.235594 | PLAY [Base pre]
2026-03-31 01:37:21.253580 |
2026-03-31 01:37:21.253741 | TASK [Setup log path fact]
2026-03-31 01:37:21.284375 | orchestrator | ok
2026-03-31 01:37:21.302461 |
2026-03-31 01:37:21.302636 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-31 01:37:21.351843 | orchestrator | ok
2026-03-31 01:37:21.369541 |
2026-03-31 01:37:21.369698 | TASK [emit-job-header : Print job information]
2026-03-31 01:37:21.433196 | # Job Information
2026-03-31 01:37:21.433667 | Ansible Version: 2.16.14
2026-03-31 01:37:21.433728 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-03-31 01:37:21.433803 | Pipeline: periodic-midnight
2026-03-31 01:37:21.433843 | Executor: 521e9411259a
2026-03-31 01:37:21.433877 | Triggered by: https://github.com/osism/testbed
2026-03-31 01:37:21.433913 | Event ID: f1387a17038c4d928703f7a15488374b
2026-03-31 01:37:21.445316 |
2026-03-31 01:37:21.445477 | LOOP [emit-job-header : Print node information]
2026-03-31 01:37:21.570084 | orchestrator | ok:
2026-03-31 01:37:21.570333 | orchestrator | # Node Information
2026-03-31 01:37:21.570368 | orchestrator | Inventory Hostname: orchestrator
2026-03-31 01:37:21.570394 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-31 01:37:21.570416 | orchestrator | Username: zuul-testbed03
2026-03-31 01:37:21.570437 | orchestrator | Distro: Debian 12.13
2026-03-31 01:37:21.570461 | orchestrator | Provider: static-testbed
2026-03-31 01:37:21.570482 | orchestrator | Region:
2026-03-31 01:37:21.570503 | orchestrator | Label: testbed-orchestrator
2026-03-31 01:37:21.570522 | orchestrator | Product Name: OpenStack Nova
2026-03-31 01:37:21.570541 | orchestrator | Interface IP: 81.163.193.140
2026-03-31 01:37:21.590974 |
2026-03-31 01:37:21.591128 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-31 01:37:22.088401 | orchestrator -> localhost | changed
2026-03-31 01:37:22.105315 |
2026-03-31 01:37:22.105504 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-31 01:37:23.190467 | orchestrator -> localhost | changed
2026-03-31 01:37:23.215931 |
2026-03-31 01:37:23.216090 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-31 01:37:23.505058 | orchestrator -> localhost | ok
2026-03-31 01:37:23.520863 |
2026-03-31 01:37:23.521053 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-31 01:37:23.547804 | orchestrator | ok
2026-03-31 01:37:23.567995 | orchestrator | included: /var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-31 01:37:23.576673 |
2026-03-31 01:37:23.576807 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-31 01:37:25.657500 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-31 01:37:25.657726 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/6dc27caeaea747b9b7722bbf633814ae_id_rsa
2026-03-31 01:37:25.657769 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/6dc27caeaea747b9b7722bbf633814ae_id_rsa.pub
2026-03-31 01:37:25.657827 | orchestrator -> localhost | The key fingerprint is:
2026-03-31 01:37:25.657855 | orchestrator -> localhost | SHA256:hLKMEG04f5qU1Di8VlgZjwB/43nKyTATMBN1rgQcOoQ zuul-build-sshkey
2026-03-31 01:37:25.657879 | orchestrator -> localhost | The key's randomart image is:
2026-03-31 01:37:25.657915 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-31 01:37:25.657938 | orchestrator -> localhost | |@X+=+o |
2026-03-31 01:37:25.657960 | orchestrator -> localhost | |EOO++o . |
2026-03-31 01:37:25.657980 | orchestrator -> localhost | |+=+=* o . |
2026-03-31 01:37:25.658000 | orchestrator -> localhost | | +*B.= . |
2026-03-31 01:37:25.658020 | orchestrator -> localhost | | o*+= . S |
2026-03-31 01:37:25.658045 | orchestrator -> localhost | | o* + |
2026-03-31 01:37:25.658066 | orchestrator -> localhost | | = |
2026-03-31 01:37:25.658086 | orchestrator -> localhost | | |
2026-03-31 01:37:25.658106 | orchestrator -> localhost | | |
2026-03-31 01:37:25.658126 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-31 01:37:25.658181 | orchestrator -> localhost | ok: Runtime: 0:00:01.572518
2026-03-31 01:37:25.666189 |
2026-03-31 01:37:25.666299 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-31 01:37:25.695269 | orchestrator | ok
2026-03-31 01:37:25.705452 | orchestrator | included: /var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-31 01:37:25.715008 |
2026-03-31 01:37:25.715118 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-31 01:37:25.740058 | orchestrator | skipping: Conditional result was False
2026-03-31 01:37:25.748722 |
2026-03-31 01:37:25.748854 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-31 01:37:26.399809 | orchestrator | changed
2026-03-31 01:37:26.410230 |
2026-03-31 01:37:26.410398 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-31 01:37:26.722374 | orchestrator | ok
2026-03-31 01:37:26.730824 |
2026-03-31 01:37:26.730984 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-31 01:37:27.207233 | orchestrator | ok
2026-03-31 01:37:27.229985 |
2026-03-31 01:37:27.230372 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-31 01:37:27.699151 | orchestrator | ok
2026-03-31 01:37:27.708487 |
2026-03-31 01:37:27.708633 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-31 01:37:27.743486 | orchestrator | skipping: Conditional result was False
2026-03-31 01:37:27.757314 |
2026-03-31 01:37:27.757483 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-31 01:37:28.238468 | orchestrator -> localhost | changed
2026-03-31 01:37:28.252942 |
2026-03-31 01:37:28.253128 | TASK [add-build-sshkey : Add back temp key]
2026-03-31 01:37:28.614018 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/6dc27caeaea747b9b7722bbf633814ae_id_rsa (zuul-build-sshkey)
2026-03-31 01:37:28.614290 | orchestrator -> localhost | ok: Runtime: 0:00:00.018186
2026-03-31 01:37:28.622024 |
2026-03-31 01:37:28.622137 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-31 01:37:29.077845 | orchestrator | ok
2026-03-31 01:37:29.087771 |
2026-03-31 01:37:29.088011 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-31 01:37:29.122917 | orchestrator | skipping: Conditional result was False
2026-03-31 01:37:29.180394 |
2026-03-31 01:37:29.180535 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-31 01:37:29.617500 | orchestrator | ok
2026-03-31 01:37:29.633262 |
2026-03-31 01:37:29.633413 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-31 01:37:29.682352 | orchestrator | ok
2026-03-31 01:37:29.694669 |
2026-03-31 01:37:29.694861 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-31 01:37:30.036196 | orchestrator -> localhost | ok
2026-03-31 01:37:30.052050 |
2026-03-31 01:37:30.052239 | TASK [validate-host : Collect information about the host]
2026-03-31 01:37:31.379422 | orchestrator | ok
2026-03-31 01:37:31.409071 |
2026-03-31 01:37:31.409315 | TASK [validate-host : Sanitize hostname]
2026-03-31 01:37:31.489057 | orchestrator | ok
2026-03-31 01:37:31.498358 |
2026-03-31 01:37:31.498537 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-31 01:37:32.074687 | orchestrator -> localhost | changed
2026-03-31 01:37:32.081579 |
2026-03-31 01:37:32.081695 | TASK [validate-host : Collect information about zuul worker]
2026-03-31 01:37:32.544710 | orchestrator | ok
2026-03-31 01:37:32.553428 |
2026-03-31 01:37:32.553581 | TASK [validate-host : Write out all zuul information for each host]
2026-03-31 01:37:33.146276 | orchestrator -> localhost | changed
2026-03-31 01:37:33.169093 |
2026-03-31 01:37:33.169282 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-31 01:37:33.510995 | orchestrator | ok
2026-03-31 01:37:33.521219 |
2026-03-31 01:37:33.521358 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-31 01:38:00.867783 | orchestrator | changed:
2026-03-31 01:38:00.868045 | orchestrator | .d..t...... src/
2026-03-31 01:38:00.868084 | orchestrator | .d..t...... src/github.com/
2026-03-31 01:38:00.868112 | orchestrator | .d..t...... src/github.com/osism/
2026-03-31 01:38:00.868135 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-31 01:38:00.868158 | orchestrator | RedHat.yml
2026-03-31 01:38:00.882652 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-31 01:38:00.882669 | orchestrator | RedHat.yml
2026-03-31 01:38:00.882721 | orchestrator | = 2.2.0"...
2026-03-31 01:38:11.464176 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-31 01:38:11.481961 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-31 01:38:11.980959 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-31 01:38:12.665244 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-31 01:38:12.730515 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-31 01:38:14.911110 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-31 01:38:15.319823 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-31 01:38:16.061217 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-31 01:38:16.061313 | orchestrator |
2026-03-31 01:38:16.062703 | orchestrator | Providers are signed by their developers.
2026-03-31 01:38:16.062738 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-31 01:38:16.062751 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-31 01:38:16.062784 | orchestrator |
2026-03-31 01:38:16.062799 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-31 01:38:16.062838 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-31 01:38:16.062852 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-31 01:38:16.062861 | orchestrator | you run "tofu init" in the future.
2026-03-31 01:38:16.063060 | orchestrator |
2026-03-31 01:38:16.063081 | orchestrator | OpenTofu has been successfully initialized!
2026-03-31 01:38:16.063094 | orchestrator |
2026-03-31 01:38:16.063106 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-31 01:38:16.063119 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-31 01:38:16.063132 | orchestrator | should now work.
2026-03-31 01:38:16.063144 | orchestrator |
2026-03-31 01:38:16.063156 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-31 01:38:16.063167 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-31 01:38:16.063179 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-31 01:38:16.248216 | orchestrator | Created and switched to workspace "ci"!
2026-03-31 01:38:16.248284 | orchestrator |
2026-03-31 01:38:16.248292 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-31 01:38:16.248297 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-31 01:38:16.248303 | orchestrator | for this configuration.
2026-03-31 01:38:16.401893 | orchestrator | ci.auto.tfvars
2026-03-31 01:38:16.405455 | orchestrator | default_custom.tf
2026-03-31 01:38:17.316445 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-31 01:38:17.850763 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-31 01:38:18.083433 | orchestrator |
2026-03-31 01:38:18.083520 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-31 01:38:18.083531 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-31 01:38:18.083539 | orchestrator | + create
2026-03-31 01:38:18.083546 | orchestrator | <= read (data resources)
2026-03-31 01:38:18.083554 | orchestrator |
2026-03-31 01:38:18.083560 | orchestrator | OpenTofu will perform the following actions:
2026-03-31 01:38:18.083576 | orchestrator |
2026-03-31 01:38:18.083596 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-31 01:38:18.083603 | orchestrator | # (config refers to values not yet known)
2026-03-31 01:38:18.083610 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-31 01:38:18.083616 | orchestrator | + checksum = (known after apply)
2026-03-31 01:38:18.083623 | orchestrator | + created_at = (known after apply)
2026-03-31 01:38:18.083630 | orchestrator | + file = (known after apply)
2026-03-31 01:38:18.083636 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.083665 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.083670 | orchestrator | + min_disk_gb = (known after apply)
2026-03-31 01:38:18.083674 | orchestrator | + min_ram_mb = (known after apply)
2026-03-31 01:38:18.083678 | orchestrator | + most_recent = true
2026-03-31 01:38:18.083682 | orchestrator | + name = (known after apply)
2026-03-31 01:38:18.083686 | orchestrator | + protected = (known after apply)
2026-03-31 01:38:18.083690 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.083697 | orchestrator | + schema = (known after apply)
2026-03-31 01:38:18.083701 | orchestrator | + size_bytes = (known after apply)
2026-03-31 01:38:18.083705 | orchestrator | + tags = (known after apply)
2026-03-31 01:38:18.083709 | orchestrator | + updated_at = (known after apply)
2026-03-31 01:38:18.083713 | orchestrator | }
2026-03-31 01:38:18.083720 | orchestrator |
2026-03-31 01:38:18.083724 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-31 01:38:18.083730 | orchestrator | # (config refers to values not yet known)
2026-03-31 01:38:18.083736 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-31 01:38:18.083743 | orchestrator | + checksum = (known after apply)
2026-03-31 01:38:18.083749 | orchestrator | + created_at = (known after apply)
2026-03-31 01:38:18.083754 | orchestrator | + file = (known after apply)
2026-03-31 01:38:18.083759 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.083765 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.083770 | orchestrator | + min_disk_gb = (known after apply)
2026-03-31 01:38:18.083775 | orchestrator | + min_ram_mb = (known after apply)
2026-03-31 01:38:18.083781 | orchestrator | + most_recent = true
2026-03-31 01:38:18.083786 | orchestrator | + name = (known after apply)
2026-03-31 01:38:18.083792 | orchestrator | + protected = (known after apply)
2026-03-31 01:38:18.083798 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.083804 | orchestrator | + schema = (known after apply)
2026-03-31 01:38:18.083809 | orchestrator | + size_bytes = (known after apply)
2026-03-31 01:38:18.083815 | orchestrator | + tags = (known after apply)
2026-03-31 01:38:18.083821 | orchestrator | + updated_at = (known after apply)
2026-03-31 01:38:18.083827 | orchestrator | }
2026-03-31 01:38:18.083832 | orchestrator |
2026-03-31 01:38:18.083838 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-31 01:38:18.083844 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-31 01:38:18.083851 | orchestrator | + content = (known after apply)
2026-03-31 01:38:18.083857 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-31 01:38:18.083863 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-31 01:38:18.083869 | orchestrator | + content_md5 = (known after apply)
2026-03-31 01:38:18.083875 | orchestrator | + content_sha1 = (known after apply)
2026-03-31 01:38:18.083881 | orchestrator | + content_sha256 = (known after apply)
2026-03-31 01:38:18.083887 | orchestrator | + content_sha512 = (known after apply)
2026-03-31 01:38:18.083894 | orchestrator | + directory_permission = "0777"
2026-03-31 01:38:18.083901 | orchestrator | + file_permission = "0644"
2026-03-31 01:38:18.083907 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-31 01:38:18.083913 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.083919 | orchestrator | }
2026-03-31 01:38:18.083928 | orchestrator |
2026-03-31 01:38:18.083932 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-31 01:38:18.083936 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-31 01:38:18.083940 | orchestrator | + content = (known after apply)
2026-03-31 01:38:18.083944 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-31 01:38:18.083948 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-31 01:38:18.083952 | orchestrator | + content_md5 = (known after apply)
2026-03-31 01:38:18.083955 | orchestrator | + content_sha1 = (known after apply)
2026-03-31 01:38:18.083959 | orchestrator | + content_sha256 = (known after apply)
2026-03-31 01:38:18.083971 | orchestrator | + content_sha512 = (known after apply)
2026-03-31 01:38:18.083975 | orchestrator | + directory_permission = "0777"
2026-03-31 01:38:18.083979 | orchestrator | + file_permission = "0644"
2026-03-31 01:38:18.083988 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-31 01:38:18.083992 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.083995 | orchestrator | }
2026-03-31 01:38:18.083999 | orchestrator |
2026-03-31 01:38:18.084003 | orchestrator | # local_file.inventory will be created
2026-03-31 01:38:18.084007 | orchestrator | + resource "local_file" "inventory" {
2026-03-31 01:38:18.084010 | orchestrator | + content = (known after apply)
2026-03-31 01:38:18.084014 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-31 01:38:18.084018 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-31 01:38:18.084022 | orchestrator | + content_md5 = (known after apply)
2026-03-31 01:38:18.084025 | orchestrator | + content_sha1 = (known after apply)
2026-03-31 01:38:18.084029 | orchestrator | + content_sha256 = (known after apply)
2026-03-31 01:38:18.084033 | orchestrator | + content_sha512 = (known after apply)
2026-03-31 01:38:18.084037 | orchestrator | + directory_permission = "0777"
2026-03-31 01:38:18.084041 | orchestrator | + file_permission = "0644"
2026-03-31 01:38:18.084046 | orchestrator | + filename = "inventory.ci"
2026-03-31 01:38:18.084053 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084058 | orchestrator | }
2026-03-31 01:38:18.084066 | orchestrator |
2026-03-31 01:38:18.084072 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-31 01:38:18.084078 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-31 01:38:18.084083 | orchestrator | + content = (sensitive value)
2026-03-31 01:38:18.084089 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-31 01:38:18.084095 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-31 01:38:18.084102 | orchestrator | + content_md5 = (known after apply)
2026-03-31 01:38:18.084108 | orchestrator | + content_sha1 = (known after apply)
2026-03-31 01:38:18.084114 | orchestrator | + content_sha256 = (known after apply)
2026-03-31 01:38:18.084121 | orchestrator | + content_sha512 = (known after apply)
2026-03-31 01:38:18.084127 | orchestrator | + directory_permission = "0700"
2026-03-31 01:38:18.084133 | orchestrator | + file_permission = "0600"
2026-03-31 01:38:18.084139 | orchestrator | + filename = ".id_rsa.ci"
2026-03-31 01:38:18.084145 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084151 | orchestrator | }
2026-03-31 01:38:18.084155 | orchestrator |
2026-03-31 01:38:18.084159 | orchestrator | # null_resource.node_semaphore will be created
2026-03-31 01:38:18.084162 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-31 01:38:18.084166 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084170 | orchestrator | }
2026-03-31 01:38:18.084178 | orchestrator |
2026-03-31 01:38:18.084185 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-31 01:38:18.084191 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-31 01:38:18.084197 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084213 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084226 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084233 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084239 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084246 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-31 01:38:18.084253 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084259 | orchestrator | + size = 80
2026-03-31 01:38:18.084266 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084272 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084279 | orchestrator | }
2026-03-31 01:38:18.084285 | orchestrator |
2026-03-31 01:38:18.084292 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-31 01:38:18.084299 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084306 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084313 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084320 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084333 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084340 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084347 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-31 01:38:18.084353 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084360 | orchestrator | + size = 80
2026-03-31 01:38:18.084367 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084373 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084379 | orchestrator | }
2026-03-31 01:38:18.084387 | orchestrator |
2026-03-31 01:38:18.084393 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-31 01:38:18.084399 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084405 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084411 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084417 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084422 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084428 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084434 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-31 01:38:18.084440 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084447 | orchestrator | + size = 80
2026-03-31 01:38:18.084454 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084460 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084466 | orchestrator | }
2026-03-31 01:38:18.084473 | orchestrator |
2026-03-31 01:38:18.084479 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-31 01:38:18.084486 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084491 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084497 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084503 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084510 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084516 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084522 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-31 01:38:18.084528 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084534 | orchestrator | + size = 80
2026-03-31 01:38:18.084546 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084552 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084558 | orchestrator | }
2026-03-31 01:38:18.084568 | orchestrator |
2026-03-31 01:38:18.084574 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-31 01:38:18.084623 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084631 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084637 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084644 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084650 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084657 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084664 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-31 01:38:18.084670 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084676 | orchestrator | + size = 80
2026-03-31 01:38:18.084682 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084688 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084694 | orchestrator | }
2026-03-31 01:38:18.084700 | orchestrator |
2026-03-31 01:38:18.084707 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-31 01:38:18.084714 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084719 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084726 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084732 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084746 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084753 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084760 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-31 01:38:18.084767 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084774 | orchestrator | + size = 80
2026-03-31 01:38:18.084780 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084787 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084793 | orchestrator | }
2026-03-31 01:38:18.084804 | orchestrator |
2026-03-31 01:38:18.084810 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-31 01:38:18.084817 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-31 01:38:18.084823 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084829 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084834 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084841 | orchestrator | + image_id = (known after apply)
2026-03-31 01:38:18.084847 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084854 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-31 01:38:18.084860 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084865 | orchestrator | + size = 80
2026-03-31 01:38:18.084871 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084877 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084883 | orchestrator | }
2026-03-31 01:38:18.084890 | orchestrator |
2026-03-31 01:38:18.084896 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-31 01:38:18.084904 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-31 01:38:18.084910 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084916 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084922 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084928 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084935 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-31 01:38:18.084941 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084947 | orchestrator | + size = 20
2026-03-31 01:38:18.084953 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.084957 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.084961 | orchestrator | }
2026-03-31 01:38:18.084965 | orchestrator |
2026-03-31 01:38:18.084969 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-31 01:38:18.084972 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-31 01:38:18.084976 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.084980 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.084983 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.084987 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.084991 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-31 01:38:18.084995 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.084998 | orchestrator | + size = 20
2026-03-31 01:38:18.085002 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.085006 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.085010 | orchestrator | }
2026-03-31 01:38:18.085016 | orchestrator |
2026-03-31 01:38:18.085020 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-31 01:38:18.085024 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-31 01:38:18.085027 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.085031 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.085035 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.085039 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.085042 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-31 01:38:18.085046 | orchestrator | + region = (known after apply)
2026-03-31 01:38:18.085055 | orchestrator | + size = 20
2026-03-31 01:38:18.085059 | orchestrator | + volume_retype_policy = "never"
2026-03-31 01:38:18.085063 | orchestrator | + volume_type = "ssd"
2026-03-31 01:38:18.085067 | orchestrator | }
2026-03-31 01:38:18.085071 | orchestrator |
2026-03-31 01:38:18.085074 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-31 01:38:18.085078 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-31 01:38:18.085082 | orchestrator | + attachment = (known after apply)
2026-03-31 01:38:18.085085 | orchestrator | + availability_zone = "nova"
2026-03-31 01:38:18.085089 | orchestrator | + id = (known after apply)
2026-03-31 01:38:18.085099 | orchestrator | + metadata = (known after apply)
2026-03-31 01:38:18.085103 | orchestrator | + name = "testbed-volume-3-node-3" 2026-03-31 01:38:18.085107 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085110 | orchestrator | + size = 20 2026-03-31 01:38:18.085114 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085118 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085121 | orchestrator | } 2026-03-31 01:38:18.085125 | orchestrator | 2026-03-31 01:38:18.085129 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-03-31 01:38:18.085133 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-31 01:38:18.085136 | orchestrator | + attachment = (known after apply) 2026-03-31 01:38:18.085140 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085144 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085148 | orchestrator | + metadata = (known after apply) 2026-03-31 01:38:18.085151 | orchestrator | + name = "testbed-volume-4-node-4" 2026-03-31 01:38:18.085155 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085159 | orchestrator | + size = 20 2026-03-31 01:38:18.085163 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085166 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085170 | orchestrator | } 2026-03-31 01:38:18.085174 | orchestrator | 2026-03-31 01:38:18.085178 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-03-31 01:38:18.085181 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-31 01:38:18.085185 | orchestrator | + attachment = (known after apply) 2026-03-31 01:38:18.085189 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085193 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085196 | orchestrator | + metadata = (known after apply) 2026-03-31 01:38:18.085200 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-03-31 01:38:18.085204 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085207 | orchestrator | + size = 20 2026-03-31 01:38:18.085211 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085215 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085219 | orchestrator | } 2026-03-31 01:38:18.085224 | orchestrator | 2026-03-31 01:38:18.085228 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-03-31 01:38:18.085232 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-31 01:38:18.085244 | orchestrator | + attachment = (known after apply) 2026-03-31 01:38:18.085248 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085252 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085256 | orchestrator | + metadata = (known after apply) 2026-03-31 01:38:18.085260 | orchestrator | + name = "testbed-volume-6-node-3" 2026-03-31 01:38:18.085263 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085267 | orchestrator | + size = 20 2026-03-31 01:38:18.085271 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085275 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085278 | orchestrator | } 2026-03-31 01:38:18.085282 | orchestrator | 2026-03-31 01:38:18.085286 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-03-31 01:38:18.085290 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-31 01:38:18.085297 | orchestrator | + attachment = (known after apply) 2026-03-31 01:38:18.085301 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085305 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085308 | orchestrator | + metadata = (known after apply) 2026-03-31 01:38:18.085312 | orchestrator | + name = "testbed-volume-7-node-4" 2026-03-31 01:38:18.085316 | orchestrator | + region = (known after apply) 
2026-03-31 01:38:18.085320 | orchestrator | + size = 20 2026-03-31 01:38:18.085324 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085327 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085331 | orchestrator | } 2026-03-31 01:38:18.085335 | orchestrator | 2026-03-31 01:38:18.085339 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-31 01:38:18.085342 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-31 01:38:18.085347 | orchestrator | + attachment = (known after apply) 2026-03-31 01:38:18.085352 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085358 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085364 | orchestrator | + metadata = (known after apply) 2026-03-31 01:38:18.085370 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-31 01:38:18.085379 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085386 | orchestrator | + size = 20 2026-03-31 01:38:18.085393 | orchestrator | + volume_retype_policy = "never" 2026-03-31 01:38:18.085399 | orchestrator | + volume_type = "ssd" 2026-03-31 01:38:18.085404 | orchestrator | } 2026-03-31 01:38:18.085681 | orchestrator | 2026-03-31 01:38:18.085747 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-31 01:38:18.085755 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-31 01:38:18.085760 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.085764 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.085768 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.085772 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.085777 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.085781 | orchestrator | + config_drive = true 2026-03-31 01:38:18.085794 | orchestrator | + created = (known after apply) 
2026-03-31 01:38:18.085800 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.085807 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-31 01:38:18.085814 | orchestrator | + force_delete = false 2026-03-31 01:38:18.085820 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.085827 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.085833 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.085839 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.085845 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.085851 | orchestrator | + name = "testbed-manager" 2026-03-31 01:38:18.085858 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.085864 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.085870 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.085912 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.085919 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.085926 | orchestrator | + user_data = (sensitive value) 2026-03-31 01:38:18.085932 | orchestrator | 2026-03-31 01:38:18.085938 | orchestrator | + block_device { 2026-03-31 01:38:18.085944 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.085951 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.085957 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.085963 | orchestrator | + multiattach = false 2026-03-31 01:38:18.085969 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.085976 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.085997 | orchestrator | } 2026-03-31 01:38:18.086003 | orchestrator | 2026-03-31 01:38:18.086009 | orchestrator | + network { 2026-03-31 01:38:18.086039 | orchestrator | + access_network = false 2026-03-31 01:38:18.086045 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.086052 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.086058 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.086064 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.086071 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.086078 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.086084 | orchestrator | } 2026-03-31 01:38:18.086091 | orchestrator | } 2026-03-31 01:38:18.086109 | orchestrator | 2026-03-31 01:38:18.086116 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-31 01:38:18.086122 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.086129 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.086135 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.086142 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.086148 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.086155 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.086161 | orchestrator | + config_drive = true 2026-03-31 01:38:18.086168 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.086174 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.086180 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.086186 | orchestrator | + force_delete = false 2026-03-31 01:38:18.086193 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.086200 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.086206 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.086212 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.086219 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.086226 | orchestrator | + name = "testbed-node-0" 2026-03-31 01:38:18.086232 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.086238 | orchestrator | + region 
= (known after apply) 2026-03-31 01:38:18.086244 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.086251 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.086257 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.086264 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.086271 | orchestrator | 2026-03-31 01:38:18.086277 | orchestrator | + block_device { 2026-03-31 01:38:18.086284 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.086291 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.086297 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.086304 | orchestrator | + multiattach = false 2026-03-31 01:38:18.086310 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.086317 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.086323 | orchestrator | } 2026-03-31 01:38:18.086329 | orchestrator | 2026-03-31 01:38:18.086335 | orchestrator | + network { 2026-03-31 01:38:18.086341 | orchestrator | + access_network = false 2026-03-31 01:38:18.086348 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.086354 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.086360 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.086367 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.086373 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.086380 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.086387 | orchestrator | } 2026-03-31 01:38:18.086394 | orchestrator | } 2026-03-31 01:38:18.086400 | orchestrator | 2026-03-31 01:38:18.086406 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-31 01:38:18.086412 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.086418 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 
01:38:18.086438 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.086444 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.086450 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.086456 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.086463 | orchestrator | + config_drive = true 2026-03-31 01:38:18.086470 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.086476 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.086482 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.086488 | orchestrator | + force_delete = false 2026-03-31 01:38:18.086496 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.086502 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.086509 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.086515 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.086523 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.086530 | orchestrator | + name = "testbed-node-1" 2026-03-31 01:38:18.086537 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.086543 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.086550 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.086557 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.086563 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.086576 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.086675 | orchestrator | 2026-03-31 01:38:18.086684 | orchestrator | + block_device { 2026-03-31 01:38:18.086692 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.086698 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.086704 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.086711 | orchestrator | + multiattach = false 2026-03-31 
01:38:18.086717 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.086723 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.086730 | orchestrator | } 2026-03-31 01:38:18.086736 | orchestrator | 2026-03-31 01:38:18.086742 | orchestrator | + network { 2026-03-31 01:38:18.086749 | orchestrator | + access_network = false 2026-03-31 01:38:18.086756 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.086763 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.086769 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.086775 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.086781 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.086787 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.086794 | orchestrator | } 2026-03-31 01:38:18.086801 | orchestrator | } 2026-03-31 01:38:18.086817 | orchestrator | 2026-03-31 01:38:18.086826 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-31 01:38:18.086833 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.086839 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.086847 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.086856 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.086862 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.086869 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.086875 | orchestrator | + config_drive = true 2026-03-31 01:38:18.086882 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.086888 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.086894 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.086901 | orchestrator | + force_delete = false 2026-03-31 01:38:18.086907 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 
01:38:18.086912 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.086918 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.086932 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.086939 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.086945 | orchestrator | + name = "testbed-node-2" 2026-03-31 01:38:18.086951 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.086958 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.086965 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.086971 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.086978 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.086984 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.086990 | orchestrator | 2026-03-31 01:38:18.086996 | orchestrator | + block_device { 2026-03-31 01:38:18.087003 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.087009 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.087015 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.087022 | orchestrator | + multiattach = false 2026-03-31 01:38:18.087028 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.087033 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087039 | orchestrator | } 2026-03-31 01:38:18.087045 | orchestrator | 2026-03-31 01:38:18.087051 | orchestrator | + network { 2026-03-31 01:38:18.087057 | orchestrator | + access_network = false 2026-03-31 01:38:18.087064 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.087070 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.087077 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.087083 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.087089 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.087095 | orchestrator | + uuid 
= (known after apply) 2026-03-31 01:38:18.087101 | orchestrator | } 2026-03-31 01:38:18.087107 | orchestrator | } 2026-03-31 01:38:18.087115 | orchestrator | 2026-03-31 01:38:18.087127 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-31 01:38:18.087134 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.087140 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.087147 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.087153 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.087160 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.087167 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.087174 | orchestrator | + config_drive = true 2026-03-31 01:38:18.087180 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.087186 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.087193 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.087199 | orchestrator | + force_delete = false 2026-03-31 01:38:18.087206 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.087213 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.087220 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.087226 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.087233 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.087239 | orchestrator | + name = "testbed-node-3" 2026-03-31 01:38:18.087245 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.087251 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.087258 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.087264 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.087270 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.087276 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.087283 | orchestrator | 2026-03-31 01:38:18.087290 | orchestrator | + block_device { 2026-03-31 01:38:18.087296 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.087304 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.087310 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.087324 | orchestrator | + multiattach = false 2026-03-31 01:38:18.087331 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.087337 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087343 | orchestrator | } 2026-03-31 01:38:18.087350 | orchestrator | 2026-03-31 01:38:18.087355 | orchestrator | + network { 2026-03-31 01:38:18.087362 | orchestrator | + access_network = false 2026-03-31 01:38:18.087368 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.087374 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.087380 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.087387 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.087394 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.087400 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087405 | orchestrator | } 2026-03-31 01:38:18.087412 | orchestrator | } 2026-03-31 01:38:18.087418 | orchestrator | 2026-03-31 01:38:18.087424 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-31 01:38:18.087431 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.087437 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.087443 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.087449 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.087456 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.087462 | orchestrator | + availability_zone = "nova" 2026-03-31 
01:38:18.087468 | orchestrator | + config_drive = true 2026-03-31 01:38:18.087482 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.087489 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.087495 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.087501 | orchestrator | + force_delete = false 2026-03-31 01:38:18.087508 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.087515 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.087521 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.087527 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.087534 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.087540 | orchestrator | + name = "testbed-node-4" 2026-03-31 01:38:18.087547 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.087552 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.087559 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.087565 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.087571 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.087579 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.087617 | orchestrator | 2026-03-31 01:38:18.087624 | orchestrator | + block_device { 2026-03-31 01:38:18.087631 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.087638 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.087645 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.087650 | orchestrator | + multiattach = false 2026-03-31 01:38:18.087656 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.087662 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087674 | orchestrator | } 2026-03-31 01:38:18.087681 | orchestrator | 2026-03-31 01:38:18.087688 | orchestrator | + network { 2026-03-31 01:38:18.087695 | orchestrator | + 
access_network = false 2026-03-31 01:38:18.087701 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.087708 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.087714 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.087725 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.087733 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.087740 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087746 | orchestrator | } 2026-03-31 01:38:18.087752 | orchestrator | } 2026-03-31 01:38:18.087768 | orchestrator | 2026-03-31 01:38:18.087774 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-31 01:38:18.087780 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-31 01:38:18.087786 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-31 01:38:18.087792 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-31 01:38:18.087798 | orchestrator | + all_metadata = (known after apply) 2026-03-31 01:38:18.087805 | orchestrator | + all_tags = (known after apply) 2026-03-31 01:38:18.087811 | orchestrator | + availability_zone = "nova" 2026-03-31 01:38:18.087818 | orchestrator | + config_drive = true 2026-03-31 01:38:18.087824 | orchestrator | + created = (known after apply) 2026-03-31 01:38:18.087830 | orchestrator | + flavor_id = (known after apply) 2026-03-31 01:38:18.087836 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-31 01:38:18.087842 | orchestrator | + force_delete = false 2026-03-31 01:38:18.087848 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-31 01:38:18.087854 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.087860 | orchestrator | + image_id = (known after apply) 2026-03-31 01:38:18.087866 | orchestrator | + image_name = (known after apply) 2026-03-31 01:38:18.087872 | orchestrator | + key_pair = "testbed" 2026-03-31 01:38:18.087878 | orchestrator | 
+ name = "testbed-node-5" 2026-03-31 01:38:18.087884 | orchestrator | + power_state = "active" 2026-03-31 01:38:18.087890 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.087896 | orchestrator | + security_groups = (known after apply) 2026-03-31 01:38:18.087902 | orchestrator | + stop_before_destroy = false 2026-03-31 01:38:18.087908 | orchestrator | + updated = (known after apply) 2026-03-31 01:38:18.087914 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-31 01:38:18.087920 | orchestrator | 2026-03-31 01:38:18.087927 | orchestrator | + block_device { 2026-03-31 01:38:18.087933 | orchestrator | + boot_index = 0 2026-03-31 01:38:18.087938 | orchestrator | + delete_on_termination = false 2026-03-31 01:38:18.087945 | orchestrator | + destination_type = "volume" 2026-03-31 01:38:18.087950 | orchestrator | + multiattach = false 2026-03-31 01:38:18.087956 | orchestrator | + source_type = "volume" 2026-03-31 01:38:18.087962 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.087968 | orchestrator | } 2026-03-31 01:38:18.087974 | orchestrator | 2026-03-31 01:38:18.087980 | orchestrator | + network { 2026-03-31 01:38:18.087986 | orchestrator | + access_network = false 2026-03-31 01:38:18.087992 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-31 01:38:18.087998 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-31 01:38:18.088004 | orchestrator | + mac = (known after apply) 2026-03-31 01:38:18.088010 | orchestrator | + name = (known after apply) 2026-03-31 01:38:18.088016 | orchestrator | + port = (known after apply) 2026-03-31 01:38:18.088022 | orchestrator | + uuid = (known after apply) 2026-03-31 01:38:18.088028 | orchestrator | } 2026-03-31 01:38:18.088034 | orchestrator | } 2026-03-31 01:38:18.088040 | orchestrator | 2026-03-31 01:38:18.088046 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-31 01:38:18.088053 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-31 01:38:18.088059 | orchestrator | + fingerprint = (known after apply) 2026-03-31 01:38:18.088064 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088070 | orchestrator | + name = "testbed" 2026-03-31 01:38:18.088076 | orchestrator | + private_key = (sensitive value) 2026-03-31 01:38:18.088083 | orchestrator | + public_key = (known after apply) 2026-03-31 01:38:18.088089 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.088095 | orchestrator | + user_id = (known after apply) 2026-03-31 01:38:18.088100 | orchestrator | } 2026-03-31 01:38:18.088106 | orchestrator | 2026-03-31 01:38:18.088112 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-31 01:38:18.088118 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-31 01:38:18.088133 | orchestrator | + device = (known after apply) 2026-03-31 01:38:18.088139 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088145 | orchestrator | + instance_id = (known after apply) 2026-03-31 01:38:18.088150 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.088164 | orchestrator | + volume_id = (known after apply) 2026-03-31 01:38:18.088171 | orchestrator | } 2026-03-31 01:38:18.088177 | orchestrator | 2026-03-31 01:38:18.088183 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-31 01:38:18.088200 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-31 01:38:18.088206 | orchestrator | + device = (known after apply) 2026-03-31 01:38:18.088212 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088218 | orchestrator | + instance_id = (known after apply) 2026-03-31 01:38:18.088224 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.088230 | orchestrator | + volume_id = (known after apply) 2026-03-31 
01:38:18.088236 | orchestrator | } 2026-03-31 01:38:18.088241 | orchestrator | 2026-03-31 01:38:18.088248 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-31 01:38:18.088256 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-31 01:38:18.088266 | orchestrator | + device = (known after apply) 2026-03-31 01:38:18.088276 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088283 | orchestrator | + instance_id = (known after apply) 2026-03-31 01:38:18.088289 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.088297 | orchestrator | + volume_id = (known after apply) 2026-03-31 01:38:18.088307 | orchestrator | } 2026-03-31 01:38:18.088316 | orchestrator | 2026-03-31 01:38:18.088326 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-31 01:38:18.088333 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-31 01:38:18.088339 | orchestrator | + device = (known after apply) 2026-03-31 01:38:18.088345 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088352 | orchestrator | + instance_id = (known after apply) 2026-03-31 01:38:18.088358 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.088364 | orchestrator | + volume_id = (known after apply) 2026-03-31 01:38:18.088370 | orchestrator | } 2026-03-31 01:38:18.088376 | orchestrator | 2026-03-31 01:38:18.088382 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-31 01:38:18.088388 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-31 01:38:18.088394 | orchestrator | + device = (known after apply) 2026-03-31 01:38:18.088400 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.088407 | orchestrator | + instance_id = (known after apply) 2026-03-31 01:38:18.088413 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-03-31 01:38:18.091124 | orchestrator | + ip_version = 4 2026-03-31 01:38:18.091128 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-31 01:38:18.091132 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-31 01:38:18.091136 | orchestrator | + name = "subnet-testbed-management" 2026-03-31 01:38:18.091139 | orchestrator | + network_id = (known after apply) 2026-03-31 01:38:18.091143 | orchestrator | + no_gateway = false 2026-03-31 01:38:18.091147 | orchestrator | + region = (known after apply) 2026-03-31 01:38:18.091151 | orchestrator | + service_types = (known after apply) 2026-03-31 01:38:18.091158 | orchestrator | + tenant_id = (known after apply) 2026-03-31 01:38:18.091162 | orchestrator | 2026-03-31 01:38:18.091166 | orchestrator | + allocation_pool { 2026-03-31 01:38:18.091170 | orchestrator | + end = "192.168.31.250" 2026-03-31 01:38:18.091174 | orchestrator | + start = "192.168.31.200" 2026-03-31 01:38:18.091178 | orchestrator | } 2026-03-31 01:38:18.091182 | orchestrator | } 2026-03-31 01:38:18.091185 | orchestrator | 2026-03-31 01:38:18.091189 | orchestrator | # terraform_data.image will be created 2026-03-31 01:38:18.091193 | orchestrator | + resource "terraform_data" "image" { 2026-03-31 01:38:18.091197 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.091200 | orchestrator | + input = "Ubuntu 24.04" 2026-03-31 01:38:18.091204 | orchestrator | + output = (known after apply) 2026-03-31 01:38:18.091208 | orchestrator | } 2026-03-31 01:38:18.091212 | orchestrator | 2026-03-31 01:38:18.091215 | orchestrator | # terraform_data.image_node will be created 2026-03-31 01:38:18.091219 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-31 01:38:18.091223 | orchestrator | + id = (known after apply) 2026-03-31 01:38:18.091226 | orchestrator | + input = "Ubuntu 24.04" 2026-03-31 01:38:18.091230 | orchestrator | + output = (known after apply) 2026-03-31 01:38:18.091234 | orchestrator | } 2026-03-31 
01:38:18.091238 | orchestrator | 2026-03-31 01:38:18.091242 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-03-31 01:38:18.091245 | orchestrator | 2026-03-31 01:38:18.091249 | orchestrator | Changes to Outputs: 2026-03-31 01:38:18.091253 | orchestrator | + manager_address = (sensitive value) 2026-03-31 01:38:18.091257 | orchestrator | + private_key = (sensitive value) 2026-03-31 01:38:18.312409 | orchestrator | terraform_data.image_node: Creating... 2026-03-31 01:38:18.313298 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=2b41932a-cf70-8a3d-30af-9ac7a2a065b5] 2026-03-31 01:38:18.313347 | orchestrator | terraform_data.image: Creating... 2026-03-31 01:38:18.314095 | orchestrator | terraform_data.image: Creation complete after 0s [id=c0d25322-7c96-c6fa-721f-c36bee79ba2d] 2026-03-31 01:38:18.336734 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-31 01:38:18.337153 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-31 01:38:18.342508 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-31 01:38:18.344254 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-31 01:38:18.347649 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-31 01:38:18.349502 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-31 01:38:18.350490 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-31 01:38:18.351177 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-31 01:38:18.353094 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-31 01:38:18.353804 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2026-03-31 01:38:18.791187 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-31 01:38:18.797643 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-31 01:38:18.798720 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-31 01:38:18.804299 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-31 01:38:18.810853 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-31 01:38:18.816368 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-31 01:38:19.359223 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=ad1bdfdf-8456-4569-a958-ffd747dd2bee] 2026-03-31 01:38:19.366219 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-31 01:38:21.960326 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=5a64e844-a251-4ee7-a817-d55da64d6351] 2026-03-31 01:38:21.969962 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-31 01:38:21.972299 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=aca90cda-810a-4a3a-a8a4-a9246b552814] 2026-03-31 01:38:21.978396 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-31 01:38:21.981571 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d1382055-b12a-4a0d-90b0-6b0bf5b2002d] 2026-03-31 01:38:21.988218 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-03-31 01:38:21.999810 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c466d3ef-6614-47a1-86d1-ef83336ce84c] 2026-03-31 01:38:22.006224 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=820fa545-b298-47e1-b072-447ef233e5c9] 2026-03-31 01:38:22.006545 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-31 01:38:22.011142 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-31 01:38:22.042434 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=cee620fc-9fd6-4c5e-b237-9b955e0088ae] 2026-03-31 01:38:22.046626 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-31 01:38:22.054871 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=a878a648-90f8-45a8-8930-74e801ae2e4e] 2026-03-31 01:38:22.065123 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-31 01:38:22.069357 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=cfb21940386332d0d430b351fe1325cfccd1c704] 2026-03-31 01:38:22.080898 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-31 01:38:22.081792 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=0036be6c-41d0-4a1c-804a-c8bed222bda7] 2026-03-31 01:38:22.086108 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=2cb44897f34b84bdfdbcca43a27551942a473a95] 2026-03-31 01:38:22.089407 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-03-31 01:38:22.093742 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=627ac388-afe2-405e-bfb6-93a96eeb5247] 2026-03-31 01:38:22.716387 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=47a85f4c-1e56-4b37-90fc-526aac14af8e] 2026-03-31 01:38:23.344404 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a4a87284-07a8-4e27-8820-2c57f4d29788] 2026-03-31 01:38:23.350347 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-31 01:38:25.383441 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=9459331e-414f-4bad-a4cf-8aef28266031] 2026-03-31 01:38:25.427541 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=53e77e6d-528f-491f-9dcc-6d0bc8238047] 2026-03-31 01:38:25.436971 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=49050c5a-8b56-4e13-a731-86d499e8d1b4] 2026-03-31 01:38:25.448992 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=61782125-295c-4c38-b420-ceea0e244801] 2026-03-31 01:38:25.462753 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=f91d726b-9268-46b5-b001-d0963ab9d126] 2026-03-31 01:38:25.472967 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=972f9726-ae68-4000-ae51-611d4e82d0e5] 2026-03-31 01:38:26.028379 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=4c067e2a-dcbf-4c2c-9def-9ff8c01e950e] 2026-03-31 01:38:26.033039 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-31 01:38:26.033730 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-03-31 01:38:26.037063 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-31 01:38:26.211382 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fa5ea434-63dd-4823-8be2-3d82dba5adff] 2026-03-31 01:38:26.224499 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-31 01:38:26.231669 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-31 01:38:26.231762 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-31 01:38:26.232593 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-31 01:38:26.235040 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-31 01:38:26.236017 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-31 01:38:26.258724 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e9a05d12-ef60-47cb-8d3c-9d08713b2f6b] 2026-03-31 01:38:26.264828 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-31 01:38:26.265448 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-31 01:38:26.266830 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-31 01:38:26.408686 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=aa87000e-cce4-416e-95bd-1e6434476e9a] 2026-03-31 01:38:26.419172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
2026-03-31 01:38:26.600558 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=7d169524-215b-4896-9dfd-743e8133198e] 2026-03-31 01:38:26.615864 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-31 01:38:26.792028 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d42993f7-df7c-4ea4-a222-995a17b6a36d] 2026-03-31 01:38:26.801731 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-31 01:38:26.821402 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=ecef56a8-6897-49da-86f9-efc0caa8fed1] 2026-03-31 01:38:26.831962 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-31 01:38:26.935911 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=19c0b8d4-bcd6-468e-8552-6ede92048ba0] 2026-03-31 01:38:26.943712 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-31 01:38:27.062519 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=db24c15a-1f41-410e-8cfc-90351923b037] 2026-03-31 01:38:27.076906 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-31 01:38:27.268090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c2525d23-b09a-4b35-98a8-ffeec867bbdc] 2026-03-31 01:38:27.272177 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=6a969a3a-2bef-476b-9ec5-50474584d71e] 2026-03-31 01:38:27.278677 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 
2026-03-31 01:38:27.349336 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=cf5c59b2-c807-4c91-bb19-00674e01bfc8] 2026-03-31 01:38:27.383187 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=c53f51cd-ad88-4e8b-9dd6-d055866c6625] 2026-03-31 01:38:27.419363 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=98d5b606-9321-4d1e-baff-c9fbb8e501aa] 2026-03-31 01:38:27.569081 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=5b0d7c62-2ba3-49e2-b23b-df3405c8e496] 2026-03-31 01:38:27.712940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=ac4f7bf3-1fed-4394-b704-0361a3989937] 2026-03-31 01:38:27.871519 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=bd11d251-bd45-4708-87ed-55f2c6b1e5ca] 2026-03-31 01:38:27.922648 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f110b1d2-3c0d-4228-881b-d109ed930bf3] 2026-03-31 01:38:28.225083 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=00a2fd43-22e9-4648-ae0b-46a0ee35e86c] 2026-03-31 01:38:29.250582 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=f3f95ccc-4253-4c8c-a101-40d67f8facca] 2026-03-31 01:38:29.277588 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-31 01:38:29.286731 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-31 01:38:29.296755 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-31 01:38:29.296896 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
2026-03-31 01:38:29.304211 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-31 01:38:29.310745 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-31 01:38:29.310936 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-31 01:38:31.065851 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=c06919c0-ea84-4087-855f-9ec8572d4834] 2026-03-31 01:38:31.074933 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-31 01:38:31.079576 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-31 01:38:31.084118 | orchestrator | local_file.inventory: Creating... 2026-03-31 01:38:31.085716 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6baf7c71ab424947fe8469850a3e853bde23ce84] 2026-03-31 01:38:31.091454 | orchestrator | local_file.inventory: Creation complete after 0s [id=f1c7049dce238394fd7ec38823512abde36cbbbc] 2026-03-31 01:38:31.820585 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c06919c0-ea84-4087-855f-9ec8572d4834] 2026-03-31 01:38:39.287885 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-31 01:38:39.298133 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-31 01:38:39.299248 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-31 01:38:39.306712 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-31 01:38:39.312266 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-31 01:38:39.312354 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-03-31 01:38:49.288650 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-31 01:38:49.298831 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-31 01:38:49.299943 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-31 01:38:49.307224 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-31 01:38:49.312592 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-31 01:38:49.312651 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-31 01:38:49.679942 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=60fa0749-3523-46a2-8d3d-1862be7ca780] 2026-03-31 01:38:49.697227 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=06ec6698-6f8f-419b-ad14-46ef929c0c81] 2026-03-31 01:38:49.787099 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=0adcd3f2-442e-46f5-aacd-6ca1d7e23fb8] 2026-03-31 01:38:59.299221 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-31 01:38:59.308422 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-31 01:38:59.313743 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-03-31 01:39:00.004552 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=eea20814-b90c-4058-b199-2878fd063ab6] 2026-03-31 01:39:00.022234 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=fe3ed7a8-edb9-45bf-afeb-45a2ee35d7e1] 2026-03-31 01:39:00.051654 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=11f85fc0-fa01-46ec-86b6-80b9ebce2901] 2026-03-31 01:39:00.069877 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-31 01:39:00.084836 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4492090419525738589] 2026-03-31 01:39:00.109471 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-31 01:39:00.110135 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-31 01:39:00.113256 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-31 01:39:00.113352 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-31 01:39:00.117681 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-31 01:39:00.119638 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-31 01:39:00.130289 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-31 01:39:00.135108 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-31 01:39:00.135257 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-31 01:39:00.149261 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-03-31 01:39:03.519579 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=eea20814-b90c-4058-b199-2878fd063ab6/a878a648-90f8-45a8-8930-74e801ae2e4e] 2026-03-31 01:39:03.533548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=fe3ed7a8-edb9-45bf-afeb-45a2ee35d7e1/5a64e844-a251-4ee7-a817-d55da64d6351] 2026-03-31 01:39:03.562309 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=0adcd3f2-442e-46f5-aacd-6ca1d7e23fb8/d1382055-b12a-4a0d-90b0-6b0bf5b2002d] 2026-03-31 01:39:03.587953 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=eea20814-b90c-4058-b199-2878fd063ab6/c466d3ef-6614-47a1-86d1-ef83336ce84c] 2026-03-31 01:39:03.598492 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=fe3ed7a8-edb9-45bf-afeb-45a2ee35d7e1/aca90cda-810a-4a3a-a8a4-a9246b552814] 2026-03-31 01:39:03.635250 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=0adcd3f2-442e-46f5-aacd-6ca1d7e23fb8/0036be6c-41d0-4a1c-804a-c8bed222bda7] 2026-03-31 01:39:04.969620 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=fe3ed7a8-edb9-45bf-afeb-45a2ee35d7e1/627ac388-afe2-405e-bfb6-93a96eeb5247] 2026-03-31 01:39:09.689193 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=eea20814-b90c-4058-b199-2878fd063ab6/820fa545-b298-47e1-b072-447ef233e5c9] 2026-03-31 01:39:09.702171 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=0adcd3f2-442e-46f5-aacd-6ca1d7e23fb8/cee620fc-9fd6-4c5e-b237-9b955e0088ae] 2026-03-31 01:39:10.150109 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-03-31 01:39:20.150630 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-31 01:39:21.056928 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2a3f973b-c47f-44b9-b9d8-72c8a81783f9] 2026-03-31 01:39:21.067922 | orchestrator | 2026-03-31 01:39:21.067968 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-31 01:39:21.067998 | orchestrator | 2026-03-31 01:39:21.068008 | orchestrator | Outputs: 2026-03-31 01:39:21.068016 | orchestrator | 2026-03-31 01:39:21.068041 | orchestrator | manager_address = 2026-03-31 01:39:21.068051 | orchestrator | private_key = 2026-03-31 01:39:21.151672 | orchestrator | ok: Runtime: 0:01:09.865035 2026-03-31 01:39:21.176025 | 2026-03-31 01:39:21.176153 | TASK [Fetch manager address] 2026-03-31 01:39:21.588591 | orchestrator | ok 2026-03-31 01:39:21.598421 | 2026-03-31 01:39:21.598564 | TASK [Set manager_host address] 2026-03-31 01:39:21.680596 | orchestrator | ok 2026-03-31 01:39:21.688071 | 2026-03-31 01:39:21.688260 | LOOP [Update ansible collections] 2026-03-31 01:39:22.448697 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-31 01:39:22.449099 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-31 01:39:22.449165 | orchestrator | Starting galaxy collection install process 2026-03-31 01:39:22.449197 | orchestrator | Process install dependency map 2026-03-31 01:39:22.449223 | orchestrator | Starting collection install process 2026-03-31 01:39:22.449259 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-31 01:39:22.449288 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-31 01:39:22.449317 | 
orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-31 01:39:22.449379 | orchestrator | ok: Item: commons Runtime: 0:00:00.466580 2026-03-31 01:39:23.221421 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-31 01:39:23.221597 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-31 01:39:23.221655 | orchestrator | Starting galaxy collection install process 2026-03-31 01:39:23.221696 | orchestrator | Process install dependency map 2026-03-31 01:39:23.221734 | orchestrator | Starting collection install process 2026-03-31 01:39:23.221769 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-31 01:39:23.221803 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-31 01:39:23.221838 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-31 01:39:23.221939 | orchestrator | ok: Item: services Runtime: 0:00:00.522136 2026-03-31 01:39:23.235783 | 2026-03-31 01:39:23.236904 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-31 01:39:33.882314 | orchestrator | ok 2026-03-31 01:39:33.891951 | 2026-03-31 01:39:33.892086 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-31 01:40:33.938565 | orchestrator | ok 2026-03-31 01:40:33.949034 | 2026-03-31 01:40:33.949161 | TASK [Fetch manager ssh hostkey] 2026-03-31 01:40:35.537820 | orchestrator | Output suppressed because no_log was given 2026-03-31 01:40:35.554510 | 2026-03-31 01:40:35.554681 | TASK [Get ssh keypair from terraform environment] 2026-03-31 01:40:36.092265 | orchestrator | ok: Runtime: 0:00:00.009528 2026-03-31 01:40:36.109733 | 2026-03-31 01:40:36.109923 | TASK [Point out that the following task takes some time and does not give 
any output] 2026-03-31 01:40:36.157337 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-31 01:40:36.167648 | 2026-03-31 01:40:36.167797 | TASK [Run manager part 0] 2026-03-31 01:40:37.031312 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-31 01:40:37.076300 | orchestrator | 2026-03-31 01:40:37.076346 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-31 01:40:37.076353 | orchestrator | 2026-03-31 01:40:37.076366 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-31 01:40:39.050768 | orchestrator | ok: [testbed-manager] 2026-03-31 01:40:39.050846 | orchestrator | 2026-03-31 01:40:39.050885 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-31 01:40:39.050898 | orchestrator | 2026-03-31 01:40:39.050911 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:40:41.068722 | orchestrator | ok: [testbed-manager] 2026-03-31 01:40:41.068883 | orchestrator | 2026-03-31 01:40:41.068906 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-31 01:40:41.785263 | orchestrator | ok: [testbed-manager] 2026-03-31 01:40:41.785360 | orchestrator | 2026-03-31 01:40:41.785380 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-31 01:40:41.844306 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:40:41.844390 | orchestrator | 2026-03-31 01:40:41.844414 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-31 01:40:41.879855 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:40:41.879904 | orchestrator | 
2026-03-31 01:40:41.879913 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-31 01:40:41.926643 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:40:41.926703 | orchestrator | 2026-03-31 01:40:41.926713 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-31 01:40:42.701346 | orchestrator | changed: [testbed-manager] 2026-03-31 01:40:42.701461 | orchestrator | 2026-03-31 01:40:42.701489 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-31 01:43:48.046585 | orchestrator | changed: [testbed-manager] 2026-03-31 01:43:48.046645 | orchestrator | 2026-03-31 01:43:48.046658 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-31 01:45:25.496238 | orchestrator | changed: [testbed-manager] 2026-03-31 01:45:25.496334 | orchestrator | 2026-03-31 01:45:25.496349 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-31 01:45:50.278232 | orchestrator | changed: [testbed-manager] 2026-03-31 01:45:50.278305 | orchestrator | 2026-03-31 01:45:50.278316 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-31 01:46:00.164521 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:00.164702 | orchestrator | 2026-03-31 01:46:00.164736 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-31 01:46:00.219097 | orchestrator | ok: [testbed-manager] 2026-03-31 01:46:00.219194 | orchestrator | 2026-03-31 01:46:00.219215 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-31 01:46:01.027158 | orchestrator | ok: [testbed-manager] 2026-03-31 01:46:01.027229 | orchestrator | 2026-03-31 01:46:01.027239 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-03-31 01:46:01.813056 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:01.813157 | orchestrator | 2026-03-31 01:46:01.813177 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-31 01:46:08.523520 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:08.523582 | orchestrator | 2026-03-31 01:46:08.523658 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-31 01:46:14.786834 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:14.786938 | orchestrator | 2026-03-31 01:46:14.786956 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-31 01:46:17.858319 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:17.859290 | orchestrator | 2026-03-31 01:46:17.859331 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-31 01:46:19.873334 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:19.873378 | orchestrator | 2026-03-31 01:46:19.873388 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-31 01:46:21.073534 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-31 01:46:21.073685 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-31 01:46:21.073708 | orchestrator | 2026-03-31 01:46:21.073731 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-31 01:46:21.115008 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-31 01:46:21.115084 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-31 01:46:21.115093 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-03-31 01:46:21.115103 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-31 01:46:24.529067 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-31 01:46:24.529187 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-31 01:46:24.529215 | orchestrator | 2026-03-31 01:46:24.529229 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-31 01:46:25.157345 | orchestrator | changed: [testbed-manager] 2026-03-31 01:46:25.157388 | orchestrator | 2026-03-31 01:46:25.157396 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-31 01:47:45.553922 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-31 01:47:45.553992 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-31 01:47:45.554006 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-31 01:47:45.554066 | orchestrator | 2026-03-31 01:47:45.554082 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-31 01:47:48.244563 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-31 01:47:48.244680 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-31 01:47:48.244705 | orchestrator | 2026-03-31 01:47:48.244721 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-31 01:47:48.244734 | orchestrator | 2026-03-31 01:47:48.244746 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:47:49.747768 | orchestrator | ok: [testbed-manager] 2026-03-31 01:47:49.747923 | orchestrator | 2026-03-31 01:47:49.747945 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-03-31 01:47:49.796861 | orchestrator | ok: [testbed-manager] 2026-03-31 01:47:49.796957 | orchestrator | 2026-03-31 01:47:49.796978 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-31 01:47:49.859873 | orchestrator | ok: [testbed-manager] 2026-03-31 01:47:49.859970 | orchestrator | 2026-03-31 01:47:49.859987 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-31 01:47:50.710669 | orchestrator | changed: [testbed-manager] 2026-03-31 01:47:50.710748 | orchestrator | 2026-03-31 01:47:50.710761 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-31 01:47:51.464212 | orchestrator | changed: [testbed-manager] 2026-03-31 01:47:51.464306 | orchestrator | 2026-03-31 01:47:51.464324 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-31 01:47:52.857769 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-31 01:47:52.857876 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-31 01:47:52.857891 | orchestrator | 2026-03-31 01:47:52.857901 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-31 01:47:54.359456 | orchestrator | changed: [testbed-manager] 2026-03-31 01:47:54.359562 | orchestrator | 2026-03-31 01:47:54.359580 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-31 01:47:56.214590 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 01:47:56.214632 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-31 01:47:56.214647 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-31 01:47:56.214653 | orchestrator | 2026-03-31 01:47:56.214660 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-03-31 01:47:56.274507 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:56.274549 | orchestrator | 2026-03-31 01:47:56.274558 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-31 01:47:56.343336 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:56.343380 | orchestrator | 2026-03-31 01:47:56.343390 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-31 01:47:56.963110 | orchestrator | changed: [testbed-manager] 2026-03-31 01:47:56.963213 | orchestrator | 2026-03-31 01:47:56.963230 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-31 01:47:57.036231 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:57.036341 | orchestrator | 2026-03-31 01:47:57.036365 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-31 01:47:57.959590 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-31 01:47:57.959690 | orchestrator | changed: [testbed-manager] 2026-03-31 01:47:57.959708 | orchestrator | 2026-03-31 01:47:57.959721 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-31 01:47:58.000495 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:58.000589 | orchestrator | 2026-03-31 01:47:58.000606 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-31 01:47:58.042186 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:58.042258 | orchestrator | 2026-03-31 01:47:58.042269 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-31 01:47:58.079378 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:58.079442 | orchestrator | 2026-03-31 01:47:58.079450 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-31 01:47:58.158192 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:47:58.158295 | orchestrator | 2026-03-31 01:47:58.158311 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-31 01:47:59.016127 | orchestrator | ok: [testbed-manager] 2026-03-31 01:47:59.016165 | orchestrator | 2026-03-31 01:47:59.016170 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-31 01:47:59.016176 | orchestrator | 2026-03-31 01:47:59.016181 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:48:00.578717 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:00.578752 | orchestrator | 2026-03-31 01:48:00.578758 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-31 01:48:01.652953 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:01.653002 | orchestrator | 2026-03-31 01:48:01.653010 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 01:48:01.653017 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-31 01:48:01.653022 | orchestrator | 2026-03-31 01:48:01.995926 | orchestrator | ok: Runtime: 0:07:25.297829 2026-03-31 01:48:02.005980 | 2026-03-31 01:48:02.006107 | TASK [Point out that the log in on the manager is now possible] 2026-03-31 01:48:02.038557 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-31 01:48:02.049482 | 2026-03-31 01:48:02.049619 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-31 01:48:02.089473 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-31 01:48:02.100135 | 2026-03-31 01:48:02.100302 | TASK [Run manager part 1 + 2] 2026-03-31 01:48:02.975005 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-31 01:48:03.059866 | orchestrator | 2026-03-31 01:48:03.059922 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-31 01:48:03.059930 | orchestrator | 2026-03-31 01:48:03.059944 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:48:06.159600 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:06.159663 | orchestrator | 2026-03-31 01:48:06.159685 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-31 01:48:06.194993 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:48:06.195051 | orchestrator | 2026-03-31 01:48:06.195061 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-31 01:48:06.236688 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:06.236789 | orchestrator | 2026-03-31 01:48:06.236808 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-31 01:48:06.288753 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:06.288817 | orchestrator | 2026-03-31 01:48:06.288826 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-31 01:48:06.352452 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:06.352514 | orchestrator | 2026-03-31 01:48:06.352523 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-31 01:48:06.421080 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:06.421168 | orchestrator | 2026-03-31 01:48:06.421186 | orchestrator | TASK 
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-31 01:48:06.473657 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-31 01:48:06.473749 | orchestrator | 2026-03-31 01:48:06.473773 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-31 01:48:07.226645 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:07.226785 | orchestrator | 2026-03-31 01:48:07.226796 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-31 01:48:07.277128 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:48:07.277227 | orchestrator | 2026-03-31 01:48:07.277245 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-31 01:48:08.694271 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:08.694365 | orchestrator | 2026-03-31 01:48:08.694382 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-31 01:48:09.306248 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:09.306368 | orchestrator | 2026-03-31 01:48:09.306397 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-31 01:48:10.534469 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:10.534536 | orchestrator | 2026-03-31 01:48:10.534553 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-31 01:48:27.200520 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:27.200626 | orchestrator | 2026-03-31 01:48:27.200641 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-31 01:48:27.915429 | orchestrator | ok: [testbed-manager] 2026-03-31 01:48:27.915520 | orchestrator | 2026-03-31 
01:48:27.915536 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-31 01:48:27.972756 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:48:27.972848 | orchestrator | 2026-03-31 01:48:27.972864 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-31 01:48:28.944189 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:28.944235 | orchestrator | 2026-03-31 01:48:28.944244 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-31 01:48:29.964692 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:29.964786 | orchestrator | 2026-03-31 01:48:29.964803 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-31 01:48:30.566383 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:30.566426 | orchestrator | 2026-03-31 01:48:30.566434 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-31 01:48:30.598670 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-31 01:48:30.598774 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-31 01:48:30.598789 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-31 01:48:30.598800 | orchestrator | deprecation_warnings=False in ansible.cfg. 
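Several tasks in this play install pinned packages into the /opt/venv virtual environment ("Install netaddr in venv", "Install ansible-core in venv", "Install requests >= 2.32.2", "Install docker >= 7.1.0"). The pattern reduces to the sketch below; the temp-dir path is illustrative (the job uses /opt/venv), and the pip line is commented out so the sketch needs no network access.

```shell
# Hypothetical reconstruction of the venv bootstrap pattern; paths are
# illustrative, pins are the ones visible in the trace.
VENV_DIR="$(mktemp -d)/venv"
# --without-pip keeps this sketch offline; the job's venv has pip.
python3 -m venv --without-pip "$VENV_DIR"
# With pip available, the job's requirements would be installed like this:
# "$VENV_DIR/bin/pip" install --no-cache-dir \
#     'python-gilt==1.2.3' 'requests>=2.32.2' 'docker>=7.1.0' \
#     netaddr Jinja2 PyYAML packaging
"$VENV_DIR/bin/python" -c 'import sys; print(sys.prefix)'
```

Keeping everything in a dedicated venv is what lets the later "Recursively change ownership of /opt/venv" task hand the whole toolchain to the operator user in one step.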
2026-03-31 01:48:32.741852 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:32.741982 | orchestrator | 2026-03-31 01:48:32.742003 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-31 01:48:42.215318 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-31 01:48:42.215370 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-31 01:48:42.215382 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-31 01:48:42.215392 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-31 01:48:42.215404 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-31 01:48:42.215413 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-31 01:48:42.215422 | orchestrator | 2026-03-31 01:48:42.215432 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-31 01:48:43.309413 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:43.309503 | orchestrator | 2026-03-31 01:48:43.309525 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-31 01:48:46.667990 | orchestrator | changed: [testbed-manager] 2026-03-31 01:48:46.668081 | orchestrator | 2026-03-31 01:48:46.668095 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-31 01:48:46.711074 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:48:46.711188 | orchestrator | 2026-03-31 01:48:46.711206 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-31 01:50:35.817913 | orchestrator | changed: [testbed-manager] 2026-03-31 01:50:35.817970 | orchestrator | 2026-03-31 01:50:35.817978 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-31 01:50:37.084918 | orchestrator | ok: [testbed-manager] 2026-03-31 01:50:37.084962 | 
orchestrator | 2026-03-31 01:50:37.084970 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 01:50:37.084977 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-31 01:50:37.084982 | orchestrator | 2026-03-31 01:50:37.267591 | orchestrator | ok: Runtime: 0:02:34.780600 2026-03-31 01:50:37.277548 | 2026-03-31 01:50:37.277698 | TASK [Reboot manager] 2026-03-31 01:50:38.820304 | orchestrator | ok: Runtime: 0:00:01.045647 2026-03-31 01:50:38.836643 | 2026-03-31 01:50:38.836819 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-31 01:50:55.308679 | orchestrator | ok 2026-03-31 01:50:55.320454 | 2026-03-31 01:50:55.320630 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-31 01:51:55.370224 | orchestrator | ok 2026-03-31 01:51:55.378781 | 2026-03-31 01:51:55.378940 | TASK [Deploy manager + bootstrap nodes] 2026-03-31 01:51:58.022268 | orchestrator | 2026-03-31 01:51:58.022635 | orchestrator | # DEPLOY MANAGER 2026-03-31 01:51:58.022664 | orchestrator | 2026-03-31 01:51:58.022679 | orchestrator | + set -e 2026-03-31 01:51:58.022692 | orchestrator | + echo 2026-03-31 01:51:58.022706 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-31 01:51:58.022723 | orchestrator | + echo 2026-03-31 01:51:58.022797 | orchestrator | + cat /opt/manager-vars.sh 2026-03-31 01:51:58.026345 | orchestrator | export NUMBER_OF_NODES=6 2026-03-31 01:51:58.026418 | orchestrator | 2026-03-31 01:51:58.026433 | orchestrator | export CEPH_VERSION=reef 2026-03-31 01:51:58.026447 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-31 01:51:58.026460 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-31 01:51:58.026496 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-31 01:51:58.026513 | orchestrator | 2026-03-31 01:51:58.026538 | orchestrator | export ARA=false 2026-03-31 01:51:58.026554 | orchestrator 
| export DEPLOY_MODE=manager 2026-03-31 01:51:58.026577 | orchestrator | export TEMPEST=false 2026-03-31 01:51:58.026595 | orchestrator | export IS_ZUUL=true 2026-03-31 01:51:58.026611 | orchestrator | 2026-03-31 01:51:58.026633 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 01:51:58.026651 | orchestrator | export EXTERNAL_API=false 2026-03-31 01:51:58.026667 | orchestrator | 2026-03-31 01:51:58.026683 | orchestrator | export IMAGE_USER=ubuntu 2026-03-31 01:51:58.026706 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-31 01:51:58.026721 | orchestrator | 2026-03-31 01:51:58.026736 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-31 01:51:58.026766 | orchestrator | 2026-03-31 01:51:58.026783 | orchestrator | + echo 2026-03-31 01:51:58.026804 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 01:51:58.027585 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 01:51:58.027615 | orchestrator | ++ INTERACTIVE=false 2026-03-31 01:51:58.027633 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-31 01:51:58.027650 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-31 01:51:58.027792 | orchestrator | + source /opt/manager-vars.sh 2026-03-31 01:51:58.027834 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-31 01:51:58.027862 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-31 01:51:58.027878 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-31 01:51:58.027999 | orchestrator | ++ CEPH_VERSION=reef 2026-03-31 01:51:58.028025 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-31 01:51:58.028042 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-31 01:51:58.028057 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-31 01:51:58.028072 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-31 01:51:58.028088 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-31 01:51:58.028118 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-31 01:51:58.028134 | orchestrator | ++ export ARA=false 
2026-03-31 01:51:58.028149 | orchestrator | ++ ARA=false 2026-03-31 01:51:58.028165 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-31 01:51:58.028185 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-31 01:51:58.028200 | orchestrator | ++ export TEMPEST=false 2026-03-31 01:51:58.028215 | orchestrator | ++ TEMPEST=false 2026-03-31 01:51:58.028230 | orchestrator | ++ export IS_ZUUL=true 2026-03-31 01:51:58.028245 | orchestrator | ++ IS_ZUUL=true 2026-03-31 01:51:58.028260 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 01:51:58.028275 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 01:51:58.028289 | orchestrator | ++ export EXTERNAL_API=false 2026-03-31 01:51:58.028304 | orchestrator | ++ EXTERNAL_API=false 2026-03-31 01:51:58.028342 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-31 01:51:58.028359 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-31 01:51:58.028377 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-31 01:51:58.028392 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-31 01:51:58.028408 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-31 01:51:58.028423 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-31 01:51:58.028438 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-31 01:51:58.090803 | orchestrator | + docker version 2026-03-31 01:51:58.205936 | orchestrator | Client: Docker Engine - Community 2026-03-31 01:51:58.206158 | orchestrator | Version: 27.5.1 2026-03-31 01:51:58.206189 | orchestrator | API version: 1.47 2026-03-31 01:51:58.206208 | orchestrator | Go version: go1.22.11 2026-03-31 01:51:58.206226 | orchestrator | Git commit: 9f9e405 2026-03-31 01:51:58.206244 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-31 01:51:58.206266 | orchestrator | OS/Arch: linux/amd64 2026-03-31 01:51:58.206284 | orchestrator | Context: default 2026-03-31 01:51:58.206304 | orchestrator | 2026-03-31 01:51:58.206371 | 
orchestrator | Server: Docker Engine - Community 2026-03-31 01:51:58.206383 | orchestrator | Engine: 2026-03-31 01:51:58.206396 | orchestrator | Version: 27.5.1 2026-03-31 01:51:58.206408 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-31 01:51:58.206458 | orchestrator | Go version: go1.22.11 2026-03-31 01:51:58.206470 | orchestrator | Git commit: 4c9b3b0 2026-03-31 01:51:58.206481 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-31 01:51:58.206492 | orchestrator | OS/Arch: linux/amd64 2026-03-31 01:51:58.206502 | orchestrator | Experimental: false 2026-03-31 01:51:58.206513 | orchestrator | containerd: 2026-03-31 01:51:58.206531 | orchestrator | Version: v2.2.2 2026-03-31 01:51:58.206549 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-31 01:51:58.206568 | orchestrator | runc: 2026-03-31 01:51:58.206585 | orchestrator | Version: 1.3.4 2026-03-31 01:51:58.206603 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-31 01:51:58.206621 | orchestrator | docker-init: 2026-03-31 01:51:58.206660 | orchestrator | Version: 0.19.0 2026-03-31 01:51:58.206683 | orchestrator | GitCommit: de40ad0 2026-03-31 01:51:58.209924 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-31 01:51:58.218641 | orchestrator | + set -e 2026-03-31 01:51:58.218757 | orchestrator | + source /opt/manager-vars.sh 2026-03-31 01:51:58.218782 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-31 01:51:58.218799 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-31 01:51:58.218815 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-31 01:51:58.218832 | orchestrator | ++ CEPH_VERSION=reef 2026-03-31 01:51:58.218850 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-31 01:51:58.218870 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-31 01:51:58.218888 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-31 01:51:58.218906 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-31 01:51:58.218924 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-03-31 01:51:58.218943 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-31 01:51:58.218961 | orchestrator | ++ export ARA=false 2026-03-31 01:51:58.218980 | orchestrator | ++ ARA=false 2026-03-31 01:51:58.218998 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-31 01:51:58.219016 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-31 01:51:58.219034 | orchestrator | ++ export TEMPEST=false 2026-03-31 01:51:58.219050 | orchestrator | ++ TEMPEST=false 2026-03-31 01:51:58.219068 | orchestrator | ++ export IS_ZUUL=true 2026-03-31 01:51:58.219085 | orchestrator | ++ IS_ZUUL=true 2026-03-31 01:51:58.219102 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 01:51:58.219122 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 01:51:58.219141 | orchestrator | ++ export EXTERNAL_API=false 2026-03-31 01:51:58.219159 | orchestrator | ++ EXTERNAL_API=false 2026-03-31 01:51:58.219177 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-31 01:51:58.219195 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-31 01:51:58.219215 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-31 01:51:58.219234 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-31 01:51:58.219252 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-31 01:51:58.219270 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-31 01:51:58.219289 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 01:51:58.219307 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 01:51:58.219376 | orchestrator | ++ INTERACTIVE=false 2026-03-31 01:51:58.219395 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-31 01:51:58.219422 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-31 01:51:58.219457 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-31 01:51:58.219476 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-31 01:51:58.224042 | orchestrator | + set -e 2026-03-31 
01:51:58.224103 | orchestrator | + VERSION=9.5.0 2026-03-31 01:51:58.224118 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-31 01:51:58.234399 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-31 01:51:58.234464 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-31 01:51:58.239174 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-31 01:51:58.244676 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-31 01:51:58.255156 | orchestrator | /opt/configuration ~ 2026-03-31 01:51:58.255216 | orchestrator | + set -e 2026-03-31 01:51:58.255230 | orchestrator | + pushd /opt/configuration 2026-03-31 01:51:58.255242 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-31 01:51:58.257251 | orchestrator | + source /opt/venv/bin/activate 2026-03-31 01:51:58.258836 | orchestrator | ++ deactivate nondestructive 2026-03-31 01:51:58.258926 | orchestrator | ++ '[' -n '' ']' 2026-03-31 01:51:58.258947 | orchestrator | ++ '[' -n '' ']' 2026-03-31 01:51:58.258987 | orchestrator | ++ hash -r 2026-03-31 01:51:58.259000 | orchestrator | ++ '[' -n '' ']' 2026-03-31 01:51:58.259011 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-31 01:51:58.259022 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-31 01:51:58.259033 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-31 01:51:58.259045 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-31 01:51:58.259068 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-31 01:51:58.259080 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-31 01:51:58.259091 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-31 01:51:58.259103 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:51:58.259114 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:51:58.259125 | orchestrator | ++ export PATH 2026-03-31 01:51:58.259137 | orchestrator | ++ '[' -n '' ']' 2026-03-31 01:51:58.259148 | orchestrator | ++ '[' -z '' ']' 2026-03-31 01:51:58.259159 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-31 01:51:58.259174 | orchestrator | ++ PS1='(venv) ' 2026-03-31 01:51:58.259185 | orchestrator | ++ export PS1 2026-03-31 01:51:58.259196 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-31 01:51:58.259207 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-31 01:51:58.259218 | orchestrator | ++ hash -r 2026-03-31 01:51:58.259421 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-31 01:51:59.656232 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-31 01:51:59.657421 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-03-31 01:51:59.659141 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-31 01:51:59.661423 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-31 01:51:59.663497 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-31 01:51:59.678297 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-31 01:51:59.679560 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-31 01:51:59.680854 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-31 01:51:59.682792 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-31 01:51:59.716179 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-31 01:51:59.717754 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-31 01:51:59.719740 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-31 01:51:59.721074 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-31 01:51:59.724754 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-31 01:51:59.939006 | orchestrator | ++ which gilt 2026-03-31 01:51:59.944011 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-31 01:51:59.944076 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-31 01:52:00.253839 | orchestrator | osism.cfg-generics: 2026-03-31 01:52:00.425756 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-31 01:52:00.425868 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-31 01:52:00.425886 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-31 01:52:00.425901 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-31 01:52:01.452021 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-31 01:52:01.466503 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-31 01:52:01.887495 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-31 01:52:01.955128 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-31 01:52:01.955243 | orchestrator | + deactivate 2026-03-31 01:52:01.955260 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-31 01:52:01.955274 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:52:01.955285 | orchestrator | + export PATH 2026-03-31 01:52:01.955297 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-31 01:52:01.955309 | orchestrator | + '[' -n '' ']' 2026-03-31 01:52:01.955375 | orchestrator | + hash -r 2026-03-31 01:52:01.955388 | orchestrator | + '[' -n '' ']' 2026-03-31 01:52:01.955404 | orchestrator | + unset VIRTUAL_ENV 2026-03-31 01:52:01.955431 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-31 01:52:01.955455 | orchestrator | + '[' '!' 
'' = nondestructive ']'
2026-03-31 01:52:01.955473 | orchestrator | + unset -f deactivate
2026-03-31 01:52:01.955492 | orchestrator | + popd
2026-03-31 01:52:01.955534 | orchestrator | ~
2026-03-31 01:52:01.957400 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-31 01:52:01.957452 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-31 01:52:01.958239 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-31 01:52:02.018514 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 01:52:02.018623 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-31 01:52:02.019950 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-31 01:52:02.089713 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 01:52:02.090108 | orchestrator | ++ semver 2024.2 2025.1
2026-03-31 01:52:02.160049 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 01:52:02.160164 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-31 01:52:02.254664 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-31 01:52:02.254773 | orchestrator | + source /opt/venv/bin/activate
2026-03-31 01:52:02.254788 | orchestrator | ++ deactivate nondestructive
2026-03-31 01:52:02.254801 | orchestrator | ++ '[' -n '' ']'
2026-03-31 01:52:02.254813 | orchestrator | ++ '[' -n '' ']'
2026-03-31 01:52:02.254824 | orchestrator | ++ hash -r
2026-03-31 01:52:02.254835 | orchestrator | ++ '[' -n '' ']'
2026-03-31 01:52:02.254846 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-31 01:52:02.254857 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-31 01:52:02.254868 | orchestrator | ++ '[' '!'
nondestructive = nondestructive ']' 2026-03-31 01:52:02.254880 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-31 01:52:02.254891 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-31 01:52:02.254903 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-31 01:52:02.254914 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-31 01:52:02.254925 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:52:02.254963 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:52:02.254975 | orchestrator | ++ export PATH 2026-03-31 01:52:02.254987 | orchestrator | ++ '[' -n '' ']' 2026-03-31 01:52:02.254998 | orchestrator | ++ '[' -z '' ']' 2026-03-31 01:52:02.255008 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-31 01:52:02.255019 | orchestrator | ++ PS1='(venv) ' 2026-03-31 01:52:02.255030 | orchestrator | ++ export PS1 2026-03-31 01:52:02.255041 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-31 01:52:02.255052 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-31 01:52:02.255063 | orchestrator | ++ hash -r 2026-03-31 01:52:02.255074 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-31 01:52:03.603968 | orchestrator | 2026-03-31 01:52:03.604062 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-31 01:52:03.604074 | orchestrator | 2026-03-31 01:52:03.604081 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-31 01:52:04.245617 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:04.245727 | orchestrator | 2026-03-31 01:52:04.245744 | orchestrator | TASK [Copy fact files] ********************************************************* 
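The trace above gates configuration on version comparisons: `semver 9.5.0 7.0.0` returns 1, so `enable_osism_kubernetes: true` is appended, while the `10.0.0-0` and `2025.1` comparisons return -1 and their branches are skipped. A minimal sketch of such a comparison helper, assuming `sort -V` semantics (the real `semver` helper in the testbed scripts may be implemented differently):

```shell
#!/usr/bin/env bash
# Sketch: print -1, 0, or 1 depending on whether the first version is
# lower than, equal to, or greater than the second. This reimplementation
# relies on GNU `sort -V` and is an assumption, not the deployed helper.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Gate a feature flag on a minimum version, as the trace does:
if [[ "$(semver 9.5.0 7.0.0)" -ge 0 ]]; then
    echo 'enable_osism_kubernetes: true'
fi
```

Returning -1/0/1 on stdout lets callers write `[[ $(semver A B) -ge 0 ]]` guards, which is exactly the shape of the checks in the trace.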
2026-03-31 01:52:05.411207 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:05.411313 | orchestrator | 2026-03-31 01:52:05.411361 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-31 01:52:05.411412 | orchestrator | 2026-03-31 01:52:05.411422 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:52:07.910477 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:07.910605 | orchestrator | 2026-03-31 01:52:07.910631 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-31 01:52:07.980240 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:07.980424 | orchestrator | 2026-03-31 01:52:07.980450 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-31 01:52:08.503175 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:08.503279 | orchestrator | 2026-03-31 01:52:08.503298 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-31 01:52:08.543967 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:08.544059 | orchestrator | 2026-03-31 01:52:08.544071 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-31 01:52:08.912238 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:08.912419 | orchestrator | 2026-03-31 01:52:08.912450 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-31 01:52:09.275816 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:09.275917 | orchestrator | 2026-03-31 01:52:09.275933 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-31 01:52:09.410462 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:09.410563 | orchestrator | 2026-03-31 01:52:09.410581 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-31 01:52:09.410594 | orchestrator | 2026-03-31 01:52:09.410606 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:52:11.438896 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:11.439000 | orchestrator | 2026-03-31 01:52:11.439015 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-31 01:52:11.567581 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-31 01:52:11.567710 | orchestrator | 2026-03-31 01:52:11.567740 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-31 01:52:11.650092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-31 01:52:11.650199 | orchestrator | 2026-03-31 01:52:11.650218 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-31 01:52:12.811952 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-31 01:52:12.812059 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-31 01:52:12.812074 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-31 01:52:12.812086 | orchestrator | 2026-03-31 01:52:12.812102 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-31 01:52:14.840798 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-31 01:52:14.840899 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-31 01:52:14.840912 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-31 01:52:14.840921 | orchestrator | 2026-03-31 01:52:14.840929 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-31 01:52:15.570892 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-31 01:52:15.571035 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:15.571052 | orchestrator | 2026-03-31 01:52:15.571065 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-31 01:52:16.254766 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-31 01:52:16.254889 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:16.254907 | orchestrator | 2026-03-31 01:52:16.254928 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-31 01:52:16.317025 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:16.317115 | orchestrator | 2026-03-31 01:52:16.317127 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-31 01:52:16.697628 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:16.697736 | orchestrator | 2026-03-31 01:52:16.697753 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-31 01:52:16.792232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-31 01:52:16.792335 | orchestrator | 2026-03-31 01:52:16.792427 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-31 01:52:18.021326 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:18.021473 | orchestrator | 2026-03-31 01:52:18.021499 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-31 01:52:18.873252 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:18.873400 | orchestrator | 2026-03-31 01:52:18.873419 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-31 01:52:30.417745 | 
orchestrator | changed: [testbed-manager] 2026-03-31 01:52:30.417872 | orchestrator | 2026-03-31 01:52:30.417891 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-31 01:52:30.483769 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:30.483889 | orchestrator | 2026-03-31 01:52:30.483939 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-31 01:52:30.483958 | orchestrator | 2026-03-31 01:52:30.483975 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:52:32.359867 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:32.359999 | orchestrator | 2026-03-31 01:52:32.360017 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-31 01:52:32.481894 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-31 01:52:32.481999 | orchestrator | 2026-03-31 01:52:32.482077 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-31 01:52:32.543262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-31 01:52:32.543411 | orchestrator | 2026-03-31 01:52:32.543431 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-31 01:52:35.188360 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:35.188527 | orchestrator | 2026-03-31 01:52:35.188544 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-31 01:52:35.249135 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:35.249232 | orchestrator | 2026-03-31 01:52:35.249248 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-31 01:52:35.391631 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-31 01:52:35.391732 | orchestrator | 2026-03-31 01:52:35.391747 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-31 01:52:38.427936 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-31 01:52:38.428053 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-31 01:52:38.428071 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-31 01:52:38.428084 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-31 01:52:38.428096 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-31 01:52:38.428107 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-31 01:52:38.428118 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-31 01:52:38.428129 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-31 01:52:38.428140 | orchestrator | 2026-03-31 01:52:38.428153 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-31 01:52:39.083300 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:39.083479 | orchestrator | 2026-03-31 01:52:39.083501 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-31 01:52:39.749245 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:39.749353 | orchestrator | 2026-03-31 01:52:39.749449 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-31 01:52:39.836420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-31 01:52:39.836507 | orchestrator | 2026-03-31 01:52:39.836521 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-31 01:52:41.156013 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-31 01:52:41.156093 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-31 01:52:41.156099 | orchestrator | 2026-03-31 01:52:41.156105 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-31 01:52:41.835951 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:41.836077 | orchestrator | 2026-03-31 01:52:41.836095 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-31 01:52:41.891242 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:41.891338 | orchestrator | 2026-03-31 01:52:41.891355 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-31 01:52:41.978588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-31 01:52:41.978686 | orchestrator | 2026-03-31 01:52:41.978701 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-31 01:52:42.641876 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:42.641987 | orchestrator | 2026-03-31 01:52:42.642004 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-31 01:52:42.717006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-31 01:52:42.717106 | orchestrator | 2026-03-31 01:52:42.717122 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-31 01:52:44.226990 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-31 01:52:44.227079 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-31 01:52:44.227088 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:44.227096 | orchestrator | 2026-03-31 01:52:44.227103 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-31 01:52:44.879029 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:44.879113 | orchestrator | 2026-03-31 01:52:44.879124 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-31 01:52:44.922242 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:44.922342 | orchestrator | 2026-03-31 01:52:44.922359 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-31 01:52:45.044609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-31 01:52:45.044698 | orchestrator | 2026-03-31 01:52:45.044711 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-31 01:52:45.611236 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:45.611370 | orchestrator | 2026-03-31 01:52:45.611457 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-31 01:52:46.040557 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:46.040682 | orchestrator | 2026-03-31 01:52:46.040710 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-31 01:52:47.459083 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-31 01:52:47.459202 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-31 01:52:47.459220 | orchestrator | 2026-03-31 01:52:47.459233 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-31 01:52:48.155697 | orchestrator | changed: [testbed-manager] 2026-03-31 
01:52:48.155795 | orchestrator | 2026-03-31 01:52:48.155810 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-31 01:52:48.566675 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:48.567624 | orchestrator | 2026-03-31 01:52:48.567680 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-31 01:52:48.950121 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:48.950222 | orchestrator | 2026-03-31 01:52:48.950238 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-31 01:52:49.000163 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:49.000265 | orchestrator | 2026-03-31 01:52:49.000284 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-31 01:52:49.083539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-31 01:52:49.083658 | orchestrator | 2026-03-31 01:52:49.083673 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-31 01:52:49.132737 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:49.132859 | orchestrator | 2026-03-31 01:52:49.132884 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-31 01:52:51.315946 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-31 01:52:51.316024 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-31 01:52:51.316032 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-31 01:52:51.316038 | orchestrator | 2026-03-31 01:52:51.316044 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-31 01:52:52.097342 | orchestrator | changed: [testbed-manager] 2026-03-31 
01:52:52.097494 | orchestrator | 2026-03-31 01:52:52.097513 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-31 01:52:52.840026 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:52.840128 | orchestrator | 2026-03-31 01:52:52.840145 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-31 01:52:53.567472 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:53.568278 | orchestrator | 2026-03-31 01:52:53.568309 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-31 01:52:53.639677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-31 01:52:53.639774 | orchestrator | 2026-03-31 01:52:53.639792 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-31 01:52:53.695840 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:53.695938 | orchestrator | 2026-03-31 01:52:53.695953 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-31 01:52:54.454927 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-31 01:52:54.455031 | orchestrator | 2026-03-31 01:52:54.455054 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-31 01:52:54.552566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-31 01:52:54.552676 | orchestrator | 2026-03-31 01:52:54.552693 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-31 01:52:55.312503 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:55.312634 | orchestrator | 2026-03-31 01:52:55.312663 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-31 01:52:55.974289 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:55.974392 | orchestrator | 2026-03-31 01:52:55.974463 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-31 01:52:56.038257 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:52:56.038349 | orchestrator | 2026-03-31 01:52:56.038363 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-31 01:52:56.128485 | orchestrator | ok: [testbed-manager] 2026-03-31 01:52:56.128581 | orchestrator | 2026-03-31 01:52:56.128596 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-31 01:52:56.957210 | orchestrator | changed: [testbed-manager] 2026-03-31 01:52:56.957326 | orchestrator | 2026-03-31 01:52:56.957350 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-31 01:54:12.474633 | orchestrator | changed: [testbed-manager] 2026-03-31 01:54:12.474735 | orchestrator | 2026-03-31 01:54:12.474748 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-31 01:54:13.526216 | orchestrator | ok: [testbed-manager] 2026-03-31 01:54:13.526335 | orchestrator | 2026-03-31 01:54:13.526363 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-31 01:54:13.584480 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:54:13.584702 | orchestrator | 2026-03-31 01:54:13.584723 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-31 01:54:16.873405 | orchestrator | changed: [testbed-manager] 2026-03-31 01:54:16.873493 | orchestrator | 2026-03-31 01:54:16.873503 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
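The `Create required directories` task earlier in this play builds the manager's directory tree idempotently (paths already present report `ok`, new ones `changed`). The equivalent shell is a `mkdir -p` loop; a sketch under a scratch `PREFIX` (an illustrative addition so it can run outside the testbed):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent directory creation the manager role performs.
# The directory list is taken from the task output above; PREFIX is an
# illustrative stand-in so the sketch does not touch the real /opt tree.
PREFIX="$(mktemp -d)"
dirs=(
    /opt/ansible
    /opt/archive
    /opt/manager/configuration
    /opt/manager/data
    /opt/manager
    /opt/manager/secrets
    /opt/ansible/secrets
    /opt/state
)
for d in "${dirs[@]}"; do
    mkdir -p "${PREFIX}${d}"   # no-op when the path already exists (the "ok" case)
done
```

`mkdir -p` gives the same idempotency the Ansible `file` module provides: rerunning the loop changes nothing.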
2026-03-31 01:54:16.933913 | orchestrator | ok: [testbed-manager] 2026-03-31 01:54:16.934076 | orchestrator | 2026-03-31 01:54:16.934106 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-31 01:54:16.934128 | orchestrator | 2026-03-31 01:54:16.934144 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-31 01:54:17.115237 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:54:17.115410 | orchestrator | 2026-03-31 01:54:17.115457 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-31 01:55:17.178646 | orchestrator | Pausing for 60 seconds 2026-03-31 01:55:17.178763 | orchestrator | changed: [testbed-manager] 2026-03-31 01:55:17.178778 | orchestrator | 2026-03-31 01:55:17.178791 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-31 01:55:20.393210 | orchestrator | changed: [testbed-manager] 2026-03-31 01:55:20.393341 | orchestrator | 2026-03-31 01:55:20.393366 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-31 01:56:22.541201 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-31 01:56:22.541299 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-31 01:56:22.541324 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-03-31 01:56:22.541331 | orchestrator | changed: [testbed-manager] 2026-03-31 01:56:22.541338 | orchestrator | 2026-03-31 01:56:22.541344 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-31 01:56:34.051908 | orchestrator | changed: [testbed-manager] 2026-03-31 01:56:34.052056 | orchestrator | 2026-03-31 01:56:34.052076 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-31 01:56:34.145235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-31 01:56:34.145333 | orchestrator | 2026-03-31 01:56:34.145348 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-31 01:56:34.145361 | orchestrator | 2026-03-31 01:56:34.145455 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-31 01:56:34.210744 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:56:34.210843 | orchestrator | 2026-03-31 01:56:34.210861 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-31 01:56:34.287195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-31 01:56:34.287297 | orchestrator | 2026-03-31 01:56:34.287312 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-31 01:56:35.112589 | orchestrator | changed: [testbed-manager] 2026-03-31 01:56:35.112745 | orchestrator | 2026-03-31 01:56:35.112763 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-31 01:56:38.561284 | orchestrator | ok: [testbed-manager] 2026-03-31 01:56:38.561405 | orchestrator | 2026-03-31 01:56:38.561426 | orchestrator | TASK 
[osism.services.manager : Display version check results] ******************
2026-03-31 01:56:38.642852 | orchestrator | ok: [testbed-manager] => {
2026-03-31 01:56:38.642951 | orchestrator | "version_check_result.stdout_lines": [
2026-03-31 01:56:38.642969 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-31 01:56:38.642981 | orchestrator | "Checking running containers against expected versions...",
2026-03-31 01:56:38.642994 | orchestrator | "",
2026-03-31 01:56:38.643006 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-31 01:56:38.643018 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-31 01:56:38.643030 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643042 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-31 01:56:38.643053 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643064 | orchestrator | "",
2026-03-31 01:56:38.643076 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-31 01:56:38.643115 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-31 01:56:38.643127 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643138 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-31 01:56:38.643150 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643161 | orchestrator | "",
2026-03-31 01:56:38.643172 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-31 01:56:38.643183 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-31 01:56:38.643194 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643205 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-31 01:56:38.643216 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643227 | orchestrator | "",
2026-03-31 01:56:38.643238 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-31 01:56:38.643249 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-31 01:56:38.643260 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643271 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-31 01:56:38.643282 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643293 | orchestrator | "",
2026-03-31 01:56:38.643307 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-31 01:56:38.643318 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-31 01:56:38.643329 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643340 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-31 01:56:38.643350 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643361 | orchestrator | "",
2026-03-31 01:56:38.643372 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-31 01:56:38.643386 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-31 01:56:38.643398 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643411 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-31 01:56:38.643422 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643435 | orchestrator | "",
2026-03-31 01:56:38.643447 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-31 01:56:38.643460 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-31 01:56:38.643472 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643485 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-31 01:56:38.643498 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643511 | orchestrator | "",
2026-03-31 01:56:38.643523 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-31 01:56:38.643535 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-31 01:56:38.643547 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643560 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-31 01:56:38.643572 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643585 | orchestrator | "",
2026-03-31 01:56:38.643597 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-31 01:56:38.643610 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-31 01:56:38.643622 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643635 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-31 01:56:38.643647 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643659 | orchestrator | "",
2026-03-31 01:56:38.643699 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-31 01:56:38.643714 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-31 01:56:38.643727 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643739 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-31 01:56:38.643753 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643765 | orchestrator | "",
2026-03-31 01:56:38.643776 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-31 01:56:38.643796 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-31 01:56:38.643807 | orchestrator | " Enabled: true",
2026-03-31 01:56:38.643819 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-31 01:56:38.643830 | orchestrator | " Status: ✅ MATCH",
2026-03-31 01:56:38.643841 | orchestrator | "",
2026-03-31 01:56:38.643852 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-31 01:56:38.643863 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.643873 | orchestrator | " Enabled: true", 2026-03-31 01:56:38.643885 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.643896 | orchestrator | " Status: ✅ MATCH", 2026-03-31 01:56:38.643907 | orchestrator | "", 2026-03-31 01:56:38.643919 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-31 01:56:38.643930 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.643941 | orchestrator | " Enabled: true", 2026-03-31 01:56:38.643952 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.643963 | orchestrator | " Status: ✅ MATCH", 2026-03-31 01:56:38.643974 | orchestrator | "", 2026-03-31 01:56:38.643984 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-31 01:56:38.643995 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.644006 | orchestrator | " Enabled: true", 2026-03-31 01:56:38.644017 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.644046 | orchestrator | " Status: ✅ MATCH", 2026-03-31 01:56:38.644057 | orchestrator | "", 2026-03-31 01:56:38.644068 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-31 01:56:38.644079 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.644101 | orchestrator | " Enabled: true", 2026-03-31 01:56:38.644113 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-31 01:56:38.644124 | orchestrator | " Status: ✅ MATCH", 2026-03-31 01:56:38.644135 | orchestrator | "", 2026-03-31 01:56:38.644147 | orchestrator | "=== Summary ===", 2026-03-31 01:56:38.644158 | orchestrator | "Errors (version mismatches): 0", 2026-03-31 01:56:38.644169 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-31 01:56:38.644180 | orchestrator | "", 2026-03-31 01:56:38.644191 | orchestrator | "✅ All running containers match expected versions!" 2026-03-31 01:56:38.644202 | orchestrator | ] 2026-03-31 01:56:38.644213 | orchestrator | } 2026-03-31 01:56:38.644225 | orchestrator | 2026-03-31 01:56:38.644236 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-31 01:56:38.707866 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:56:38.707955 | orchestrator | 2026-03-31 01:56:38.707968 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 01:56:38.707979 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-31 01:56:38.707988 | orchestrator | 2026-03-31 01:56:38.836321 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-31 01:56:38.836411 | orchestrator | + deactivate 2026-03-31 01:56:38.836424 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-31 01:56:38.836437 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-31 01:56:38.836447 | orchestrator | + export PATH 2026-03-31 01:56:38.836457 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-31 01:56:38.836468 | orchestrator | + '[' -n '' ']' 2026-03-31 01:56:38.836478 | orchestrator | + hash -r 2026-03-31 01:56:38.836488 | orchestrator | + '[' -n '' ']' 2026-03-31 01:56:38.836499 | orchestrator | + unset VIRTUAL_ENV 2026-03-31 01:56:38.836508 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-31 01:56:38.836518 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-31 01:56:38.836528 | orchestrator | + unset -f deactivate 2026-03-31 01:56:38.836539 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-31 01:56:38.845239 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-31 01:56:38.845333 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-31 01:56:38.845391 | orchestrator | + local max_attempts=60 2026-03-31 01:56:38.845408 | orchestrator | + local name=ceph-ansible 2026-03-31 01:56:38.845419 | orchestrator | + local attempt_num=1 2026-03-31 01:56:38.846260 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 01:56:38.887796 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 01:56:38.888935 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-31 01:56:38.888982 | orchestrator | + local max_attempts=60 2026-03-31 01:56:38.888997 | orchestrator | + local name=kolla-ansible 2026-03-31 01:56:38.889009 | orchestrator | + local attempt_num=1 2026-03-31 01:56:38.889035 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-31 01:56:38.929943 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 01:56:38.930056 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-31 01:56:38.930067 | orchestrator | + local max_attempts=60 2026-03-31 01:56:38.930076 | orchestrator | + local name=osism-ansible 2026-03-31 01:56:38.930083 | orchestrator | + local attempt_num=1 2026-03-31 01:56:38.930895 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-31 01:56:38.965050 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 01:56:38.965153 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-31 01:56:38.965169 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-31 01:56:39.758391 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-31 01:56:39.967491 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-31 01:56:39.967618 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-31 01:56:39.967644 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-31 01:56:39.967666 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-31 01:56:39.967748 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-31 01:56:39.967797 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-31 01:56:39.967813 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-31 01:56:39.967825 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-31 01:56:39.967836 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-31 01:56:39.967849 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-31 01:56:39.967868 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-31 01:56:39.967886 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-31 01:56:39.967904 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-31 01:56:39.967951 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-31 01:56:39.967971 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-31 01:56:39.967992 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-31 01:56:39.973508 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-31 01:56:40.023849 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-31 01:56:40.023927 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-31 01:56:40.027569 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-31 01:56:52.297871 | orchestrator | 2026-03-31 01:56:52 | INFO  | Task c189216a-8934-4a2c-a09c-7300b9d67894 (resolvconf) was prepared for execution. 2026-03-31 01:56:52.298002 | orchestrator | 2026-03-31 01:56:52 | INFO  | It takes a moment until task c189216a-8934-4a2c-a09c-7300b9d67894 (resolvconf) has been started and output is visible here. 
2026-03-31 01:57:07.093001 | orchestrator | 2026-03-31 01:57:07.093107 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-31 01:57:07.093121 | orchestrator | 2026-03-31 01:57:07.093131 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-31 01:57:07.093140 | orchestrator | Tuesday 31 March 2026 01:56:56 +0000 (0:00:00.151) 0:00:00.151 ********* 2026-03-31 01:57:07.093149 | orchestrator | ok: [testbed-manager] 2026-03-31 01:57:07.093159 | orchestrator | 2026-03-31 01:57:07.093169 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-31 01:57:07.093178 | orchestrator | Tuesday 31 March 2026 01:57:00 +0000 (0:00:04.110) 0:00:04.262 ********* 2026-03-31 01:57:07.093187 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:57:07.093197 | orchestrator | 2026-03-31 01:57:07.093206 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-31 01:57:07.093214 | orchestrator | Tuesday 31 March 2026 01:57:00 +0000 (0:00:00.060) 0:00:04.323 ********* 2026-03-31 01:57:07.093223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-31 01:57:07.093233 | orchestrator | 2026-03-31 01:57:07.093242 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-31 01:57:07.093251 | orchestrator | Tuesday 31 March 2026 01:57:00 +0000 (0:00:00.090) 0:00:04.413 ********* 2026-03-31 01:57:07.093269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-31 01:57:07.093278 | orchestrator | 2026-03-31 01:57:07.093287 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-31 01:57:07.093296 | orchestrator | Tuesday 31 March 2026 01:57:00 +0000 (0:00:00.083) 0:00:04.496 ********* 2026-03-31 01:57:07.093305 | orchestrator | ok: [testbed-manager] 2026-03-31 01:57:07.093314 | orchestrator | 2026-03-31 01:57:07.093323 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-31 01:57:07.093332 | orchestrator | Tuesday 31 March 2026 01:57:02 +0000 (0:00:01.213) 0:00:05.709 ********* 2026-03-31 01:57:07.093341 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:57:07.093349 | orchestrator | 2026-03-31 01:57:07.093358 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-31 01:57:07.093367 | orchestrator | Tuesday 31 March 2026 01:57:02 +0000 (0:00:00.068) 0:00:05.778 ********* 2026-03-31 01:57:07.093393 | orchestrator | ok: [testbed-manager] 2026-03-31 01:57:07.093402 | orchestrator | 2026-03-31 01:57:07.093411 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-31 01:57:07.093420 | orchestrator | Tuesday 31 March 2026 01:57:02 +0000 (0:00:00.520) 0:00:06.299 ********* 2026-03-31 01:57:07.093429 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:57:07.093437 | orchestrator | 2026-03-31 01:57:07.093446 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-31 01:57:07.093456 | orchestrator | Tuesday 31 March 2026 01:57:02 +0000 (0:00:00.081) 0:00:06.380 ********* 2026-03-31 01:57:07.093464 | orchestrator | changed: [testbed-manager] 2026-03-31 01:57:07.093473 | orchestrator | 2026-03-31 01:57:07.093482 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-31 01:57:07.093491 | orchestrator | Tuesday 31 March 2026 01:57:03 +0000 (0:00:00.599) 0:00:06.980 ********* 2026-03-31 01:57:07.093499 | orchestrator | changed: 
[testbed-manager] 2026-03-31 01:57:07.093508 | orchestrator | 2026-03-31 01:57:07.093517 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-31 01:57:07.093526 | orchestrator | Tuesday 31 March 2026 01:57:04 +0000 (0:00:01.166) 0:00:08.147 ********* 2026-03-31 01:57:07.093535 | orchestrator | ok: [testbed-manager] 2026-03-31 01:57:07.093544 | orchestrator | 2026-03-31 01:57:07.093553 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-31 01:57:07.093562 | orchestrator | Tuesday 31 March 2026 01:57:05 +0000 (0:00:01.007) 0:00:09.154 ********* 2026-03-31 01:57:07.093570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-31 01:57:07.093579 | orchestrator | 2026-03-31 01:57:07.093588 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-31 01:57:07.093597 | orchestrator | Tuesday 31 March 2026 01:57:05 +0000 (0:00:00.077) 0:00:09.232 ********* 2026-03-31 01:57:07.093606 | orchestrator | changed: [testbed-manager] 2026-03-31 01:57:07.093614 | orchestrator | 2026-03-31 01:57:07.093623 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 01:57:07.093633 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 01:57:07.093642 | orchestrator | 2026-03-31 01:57:07.093650 | orchestrator | 2026-03-31 01:57:07.093659 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 01:57:07.093668 | orchestrator | Tuesday 31 March 2026 01:57:06 +0000 (0:00:01.185) 0:00:10.417 ********* 2026-03-31 01:57:07.093676 | orchestrator | =============================================================================== 2026-03-31 01:57:07.093685 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.11s 2026-03-31 01:57:07.093694 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.21s 2026-03-31 01:57:07.093724 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-03-31 01:57:07.093733 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s 2026-03-31 01:57:07.093742 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s 2026-03-31 01:57:07.093751 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s 2026-03-31 01:57:07.093774 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2026-03-31 01:57:07.093783 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-31 01:57:07.093792 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-31 01:57:07.093801 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-31 01:57:07.093809 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-31 01:57:07.093818 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-31 01:57:07.093834 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-31 01:57:07.432157 | orchestrator | + osism apply sshconfig 2026-03-31 01:57:19.540535 | orchestrator | 2026-03-31 01:57:19 | INFO  | Task 8610bf83-8905-465c-81b0-d26eb47fce34 (sshconfig) was prepared for execution. 
2026-03-31 01:57:19.541970 | orchestrator | 2026-03-31 01:57:19 | INFO  | It takes a moment until task 8610bf83-8905-465c-81b0-d26eb47fce34 (sshconfig) has been started and output is visible here. 2026-03-31 01:57:31.903308 | orchestrator | 2026-03-31 01:57:31.903433 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-31 01:57:31.903450 | orchestrator | 2026-03-31 01:57:31.903480 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-31 01:57:31.903492 | orchestrator | Tuesday 31 March 2026 01:57:23 +0000 (0:00:00.171) 0:00:00.171 ********* 2026-03-31 01:57:31.903502 | orchestrator | ok: [testbed-manager] 2026-03-31 01:57:31.903513 | orchestrator | 2026-03-31 01:57:31.903523 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-31 01:57:31.903534 | orchestrator | Tuesday 31 March 2026 01:57:24 +0000 (0:00:00.579) 0:00:00.751 ********* 2026-03-31 01:57:31.903544 | orchestrator | changed: [testbed-manager] 2026-03-31 01:57:31.903555 | orchestrator | 2026-03-31 01:57:31.903564 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-31 01:57:31.903574 | orchestrator | Tuesday 31 March 2026 01:57:25 +0000 (0:00:00.545) 0:00:01.296 ********* 2026-03-31 01:57:31.903584 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-31 01:57:31.903594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-31 01:57:31.903604 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-31 01:57:31.903613 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-31 01:57:31.903642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-31 01:57:31.903652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-31 01:57:31.903661 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-31 01:57:31.903671 | orchestrator | 2026-03-31 01:57:31.903681 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-31 01:57:31.903690 | orchestrator | Tuesday 31 March 2026 01:57:30 +0000 (0:00:05.960) 0:00:07.257 ********* 2026-03-31 01:57:31.903700 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:57:31.903710 | orchestrator | 2026-03-31 01:57:31.903796 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-31 01:57:31.903807 | orchestrator | Tuesday 31 March 2026 01:57:31 +0000 (0:00:00.080) 0:00:07.338 ********* 2026-03-31 01:57:31.903817 | orchestrator | changed: [testbed-manager] 2026-03-31 01:57:31.903827 | orchestrator | 2026-03-31 01:57:31.903836 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 01:57:31.903848 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 01:57:31.903858 | orchestrator | 2026-03-31 01:57:31.903868 | orchestrator | 2026-03-31 01:57:31.903878 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 01:57:31.903888 | orchestrator | Tuesday 31 March 2026 01:57:31 +0000 (0:00:00.573) 0:00:07.911 ********* 2026-03-31 01:57:31.903897 | orchestrator | =============================================================================== 2026-03-31 01:57:31.903907 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.96s 2026-03-31 01:57:31.903917 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2026-03-31 01:57:31.903927 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2026-03-31 01:57:31.903937 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.55s 2026-03-31 01:57:31.903970 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-31 01:57:32.235487 | orchestrator | + osism apply known-hosts 2026-03-31 01:57:44.449236 | orchestrator | 2026-03-31 01:57:44 | INFO  | Task 42c28bb9-56f0-46dd-af3d-6a9bbd526b92 (known-hosts) was prepared for execution. 2026-03-31 01:57:44.449354 | orchestrator | 2026-03-31 01:57:44 | INFO  | It takes a moment until task 42c28bb9-56f0-46dd-af3d-6a9bbd526b92 (known-hosts) has been started and output is visible here. 2026-03-31 01:58:02.360465 | orchestrator | 2026-03-31 01:58:02.360594 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-31 01:58:02.360613 | orchestrator | 2026-03-31 01:58:02.360626 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-31 01:58:02.360638 | orchestrator | Tuesday 31 March 2026 01:57:48 +0000 (0:00:00.209) 0:00:00.209 ********* 2026-03-31 01:58:02.360650 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-31 01:58:02.360662 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-31 01:58:02.360673 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-31 01:58:02.360684 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-31 01:58:02.360695 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-31 01:58:02.360706 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-31 01:58:02.360717 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-31 01:58:02.360728 | orchestrator | 2026-03-31 01:58:02.360739 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-31 01:58:02.360795 | orchestrator | Tuesday 31 March 2026 01:57:55 +0000 (0:00:06.411) 0:00:06.621 ********* 2026-03-31 
01:58:02.360809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-31 01:58:02.360822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-31 01:58:02.360833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-31 01:58:02.360844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-31 01:58:02.360855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-31 01:58:02.360876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-31 01:58:02.360888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-31 01:58:02.360899 | orchestrator | 2026-03-31 01:58:02.360910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.360921 | orchestrator | Tuesday 31 March 2026 01:57:55 +0000 (0:00:00.178) 0:00:06.800 ********* 2026-03-31 01:58:02.360932 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8308BNIi6iKwrGBJr6Bf1pwzA2LYCcDaJXSRMUJ9IQPuPMgbtpY4hN9XLlPCeF2LP2RLP0G3uly6UoTPPfx2M=) 2026-03-31 01:58:02.360954 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCrHq8hIrnP4eREOWCDmcOXjngP/SBB96Y2L3kyD3DvuaLwWDWK43gTAZt5uX1uk4ybCLEaqn5EcL/WqaEmG/QLvCYhI3WRpArr6u8LmZeshaJqQ69KO2DPfLY+vz5ImAGWepqXgzgqsU7psyY3M6ig5IsuURKmx/RKG5Oyu5DxyLu4wVEP1L9487jMgNTNdTMnyWAXzoKgkt5d8JRPHvWmKS1XGodxmBiE45+UYS2xr9wlh7R3WF7KKItfIwYDh+mt79cjM1w9EnK/J9wCSm26hqexjy06fFFY9WZ12n71Zw5B10Mq5K9oLxUO9zGc4OENCQhLgAKjCM4JA3n3Y4a+/oKMQ7Lm1VhK8GfmEJcRGdgEpqW8KfZCzUn7aqLGz5XnRihhMx3vcUqA7fvkgQEqUU8mv6QY8bnHr2CDPdSXvLuuitzA0P/OJm9Yfv3Qq0zPYXR99YQjNAATviWtor7bMaUe8TMyWoyDWqaU9LqmdROuppsjeVgaYdNqFtbwY0=) 2026-03-31 01:58:02.360992 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBEmDKrrDxxWa0hoo1V3bl53r9C18c60/PljE4vTgPX4) 2026-03-31 01:58:02.361007 | orchestrator | 2026-03-31 01:58:02.361020 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.361033 | orchestrator | Tuesday 31 March 2026 01:57:56 +0000 (0:00:01.255) 0:00:08.056 ********* 2026-03-31 01:58:02.361046 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGzKZ6wIiFGyi4uqiYpyBBTRPewWZsfhQDGCKHKtGPsd0jkj1HO7Z1hKcvX+D0SmgMo4ZKOhA/9HL/oa3wk1scA=) 2026-03-31 01:58:02.361088 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCO5xEaTr+8yr16pE6sjndDkDIaKIi5TFWPZgePArKoDAeFyzsC+IVRgjifRo4ut7PJXTv+6BzofVUl//8nGr+zrf7hH9WNjtCWJJh7kybVbDjwCIx++JqDt9Er9d+PEfWzw2SfS7+iB/KEuSR8aJ7KzP8FBN0q0b3P292RMxXZBQxOEU0erqJOeeUqE6im/A6/5+MIyx1NLud2jD4jb+HM6C8Ws+655v8PFNDzaqHgt85Rjhg3xZXGVIrMm+bCn8FlITf6CsybEZVH0rs+t7h2qoPV3MrwwAhkn5F0aoglMV/1zxZ4UhtC+DjZLOi3OfL/0A4GCDldW/xUvDUOprpvqMMjDuQdfCseMZWe0xFd+QdUx8DT5ZUzNf9iQrOGUlRf7dAJvqYFs3DAUM2rAM2qE1QhxogvDxqWakJP7fsNGfR9ge8I2aksxDdOFdI/iEc1tpoBonK0fBATIK6LaWS8QMDaq136dO13ekIGbg6cNCQdrQGnVRcyTqS2cK6Go9M=) 2026-03-31 01:58:02.361104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINj3R2mKsDUx6ei4NDgBpokK+5sThTgjOXsNa/VHdBi9) 2026-03-31 01:58:02.361117 | orchestrator | 2026-03-31 01:58:02.361129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.361142 | orchestrator | Tuesday 31 March 2026 01:57:57 +0000 (0:00:01.111) 0:00:09.168 ********* 2026-03-31 01:58:02.361156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY4maVTxGhKc5R+YZKqiHD3nLd9crNlDOe88A6VUJwDY4ttZmpJIaErA77vdDGVWpsbxZUBZu1NFX3WF4khAz5LWG2TiMXlfyZkHbI7IonXUNAZ2uJKl2J2ef6QFtP/pHCPaZZw2RPgs/Be2FOdUeVgT4tDLJ1e/xnHYyUrB90o4LvQ7D0rOGisKTwuGXi3eTbXCUwEch1BPcOuPT1fekvVdD4Gi4Q/z5M5v2PdwK4QUS6B32zpFbA+81FkBuKR7Zx5K7nga3vmpaxB+ugX+rADt3mp7ik/Qp/3rfL3BqcIp3PPIFIhKqDhvnAtDC+kwwCzhI5hrgztxNA4bN9Nb8KKp3epXdVFc7tQSN1KIRGBRJCEmkn6hLBiDRVwvVcP7OUCkn5yXFJFKtAzeZM/OsxMfRSnWjbIPDMLmyBS1YdtMOAQiN5fsnkrPUafLx8PhxFuppVOUUFeiA8qvylVobB2mui6SiIbFYdBEjmmDTnH08WY+dQMdaMwV/K7SS5oVE=) 2026-03-31 01:58:02.361169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcrR23a8DZgygis7IV4dCrF9Nd6WeaHlWMhon8JsqHpW3l6TvPZhVV3zSdJbKf578gvoju8anTCv/CHxq8OWNg=) 2026-03-31 01:58:02.361183 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHfCEYr52/KYN8k2U9R01CLbL8NJ8IeMQ/afdbye2/fN) 2026-03-31 01:58:02.361196 | orchestrator | 2026-03-31 01:58:02.361208 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.361221 | orchestrator | Tuesday 31 March 2026 01:57:58 +0000 (0:00:01.181) 0:00:10.349 ********* 2026-03-31 01:58:02.361234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3zTWIqfiFX8Qhc5qdVp1sEFn/x1uwqqGZqnQ/eEWoCqKzL16hqZBO+9TOW3TaoFkOp9a+f4aroCd6UMG+s+/QT/IHwyti4ZHtNX5jGbL3bJxuS3LxPfcOj838HZUBvrDrVd70hvjaWR2iTjmAxWmo5dMie3l/KpuSX3Xz1j5tusQEAvu1sTLko4sIFWSTzZ8IBEp0mfAEdGNUmwRoIJDG5b0b1/UYHMVKF9FBmrXaqoJFVH75sD8yAkJ9uXBfqCIN0JZ8Y0bWwgZoE1OEvm7+V+Ik0FhenIB8q6/a+d0Qk/txmZydzKEp7mXgQsGUS44iJu6sJPPcOJHO6/Za478D90N5lvYBnnhzBdogP+VSeuLg0uTb3Dd6hRjWdDxe7OBBInRTtW9FL5plvIoX/s3MEOtJDpIoTtpC2Nmpoj4NEsOm2NNyXU4ig3KShEJ8a9LijovSn7xU+88oiea3iv+OtgBHCHI30BcJMdqD1kUvHp1PUdHCV1cKJRrLPwcC09c=) 2026-03-31 01:58:02.361256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMy5tV/4dj9lDS9GVmhmlPWcR/+3Ij5BsxEKdIhR2uvJ) 2026-03-31 01:58:02.361267 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHRuRlR+Ux2+QQgpVVuFaHEnlZ7CokFIj9oReREI/rRaqHkPbOheb9byKKDr9jFV4/D88PPvO4R3fRi7q64q3JE=) 2026-03-31 01:58:02.361278 | orchestrator | 2026-03-31 01:58:02.361290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.361301 | orchestrator | Tuesday 31 March 2026 01:58:00 +0000 (0:00:01.124) 0:00:11.474 ********* 2026-03-31 01:58:02.361391 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDnkOBOqmcR2d1rrqjn5iXknAU5qcu+Suvk4ZrjYLnpDko/f39ZV5lwyHEPdborwDEtZSEv/s8pQIxS1KHsJ4q4=) 2026-03-31 01:58:02.361403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZtelO7ewWQTwbFWZ8nzfdt65cfTPUAw2ktH2FAsmTr) 2026-03-31 01:58:02.361414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1xQf+wcHqny+A3lBbVVbZwG5stY06UgTfl0NJW+l6lo7IYJgymKKJr+X0fk9jA5RYgajWRDMiUCGBu81YIaP0a8ViYko1h+aAZLFPD5yJp/BpbyF5STmQAbxq37ERlETnmqq2Woh0A1XRnkhXSBcYsPIDoPzUj/2hhVJkN75ENH74tsP0DdVIuKuNY+1UGedGBmzx9X8QHZWWNSgqtsG0xysPJGPOWgTysEyAhzM3yWHo54cv//3VW11zbHTq9OwBCLkjfrIEdtYZ/ZfVxLm5C/2NZzwJoj451Kmx55WcEJzYif8O3eo2CQ57h14ExFB6vl7p8C8BdAhEEqfSDAXbzOBbAhDzrP1+HNilWbHnlWOmZisW+YwSPgMHwTLIv2+Cai50F8AgxPUr28vfgkPz4jYJR+rSbmYxjwZjNaatCDUPo7629EGbU46giSHxtDHllDdCGsSXOHVyrewvtTk3KNTEs5720c+KjB0zCGJbMBIdMrxwgEYaSoBzSIHoUnE=) 2026-03-31 01:58:02.361426 | orchestrator | 2026-03-31 01:58:02.361437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:02.361448 | orchestrator | Tuesday 31 March 2026 01:58:01 +0000 (0:00:01.146) 0:00:12.620 ********* 2026-03-31 01:58:02.361467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHYRN7P8KvAPcEahkSZ2nRNPvRiA4tNEAaZZUo92sIKo2BKq2SPq4WZU9+iM/nKEL8RfUnMhPQ6MKJ+SDFuuKgM=) 2026-03-31 01:58:13.700121 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPpMpgsPdZgOCjpXw1Hoh4gc+AiPDYcwVKmJAv+dvN1nyrGhV1PxbIaPnbnSrmVbuxGCSmoaX9sycHSVV71LN+grJZArSAmeAsY7pXLhiqxJF5s6hJgF2bhf+V2JgGgMrk5qHieMFBrYMrqgTBpnfvAeKRmHO6fduzSV49IWciDzSzu6TUrZ+QCSforWG1vIgOX4p8kx6HxSU9hiJ/tQGQeNGUUB2t64zBsiEWgH6hIX1iQQMXOf3kTGXwKafYkXaNeeyBmY4icr6kOvBa96PAVAD2spYx/NLOxgT0wn24YXTZUnVK9r+77mj5TEgGGR99zA5vwifJQ+z44jZlQo1BygTjDfmPUqrAMbhiTi9L8lWbDKexg+4nAimUqri0r1r3+N5RPgQY1EhJQ2QwyQ3nybatG8m3P1kToUsm2Sx0KVCtX2dCafnZx/73+NT5t2mBEeooXBq4i/Cx8lAbVMABQV/pS5CEcMcap1DOomXHj+JdHI8ylzt5hT6I5bak4uM=) 2026-03-31 01:58:13.700226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII03D+XF2pnC9YU3Ynkzxl7rzqg/JyrMvy8LWQDgE2em) 2026-03-31 01:58:13.700238 | orchestrator | 2026-03-31 01:58:13.700245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:13.700253 | orchestrator | Tuesday 31 March 2026 01:58:02 +0000 (0:00:01.146) 0:00:13.767 ********* 2026-03-31 01:58:13.700260 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRy54ZYjRlipZvOl0wX3k79NOFLyYs9WOO+EGuzPfZbET9HnYSjq08XbcYDTbO75aHybM9vy+sGjewEwe2lc1o=) 2026-03-31 01:58:13.700268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpzGdp+dnXne69V7NRzrAhl0RtdQG56kMOcjEjXwyBss87/hmfu/CU+NKPeMs4lVxouqMjR4aELZZed0VPnr7ymEZ6zYBkAw6MHN1bB0ryml7F8CljVJISRdjfcOFIQpy+ADvsAR5U950AEfSfwTqb+YpGqaX0/rhBj3gV6TBfcKu3wjlMQfZMtfz10ZyhtFnfRxjryA4N81QCglCjO+A2aYklSsrCIuoexi/b8q5Es7GsKjxCwzAQwlDPlxaa2tkPvSpz6mMVcF3gbwM/txpAhh3u8pQZdJsb5YPmk/FndYDTczDcH1eNpsanzJkr+agoNxV3DO55P6IfAxSIq8jDFRvchY/V3zO+pmb7Z2YMSg4rAQHqmB5EZf+MzJC2jNLn4ky3kjbKOgHdfxYGCKIN6GJ0bOx6OAUuJkBXA3h8VgeqAj89DVj2qpu/bNyIABxAd1m/osk21xYj50uw4BlbyKSSP8DfssTPVu1DnPHnVVXKWQeo0UcencY28RGhetk=) 2026-03-31 01:58:13.700293 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMaJH2+RmD3Po/0Y23+e1TzXooKsKfJbySbWmPIPbMFC) 2026-03-31 01:58:13.700299 | orchestrator | 2026-03-31 01:58:13.700305 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-31 01:58:13.700313 | orchestrator | Tuesday 31 March 2026 01:58:03 +0000 (0:00:01.154) 0:00:14.921 ********* 2026-03-31 01:58:13.700320 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-31 01:58:13.700326 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-31 01:58:13.700332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-31 01:58:13.700338 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-31 01:58:13.700343 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-31 01:58:13.700349 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-31 01:58:13.700356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-31 01:58:13.700361 | orchestrator | 2026-03-31 01:58:13.700369 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-31 01:58:13.700376 | orchestrator | Tuesday 31 March 2026 01:58:09 +0000 (0:00:05.557) 0:00:20.478 ********* 2026-03-31 01:58:13.700383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-31 01:58:13.700391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-31 01:58:13.700397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-03-31 01:58:13.700404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-31 01:58:13.700411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-31 01:58:13.700417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-31 01:58:13.700423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-31 01:58:13.700448 | orchestrator | 2026-03-31 01:58:13.700470 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:13.700483 | orchestrator | Tuesday 31 March 2026 01:58:09 +0000 (0:00:00.195) 0:00:20.674 ********* 2026-03-31 01:58:13.700507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCrHq8hIrnP4eREOWCDmcOXjngP/SBB96Y2L3kyD3DvuaLwWDWK43gTAZt5uX1uk4ybCLEaqn5EcL/WqaEmG/QLvCYhI3WRpArr6u8LmZeshaJqQ69KO2DPfLY+vz5ImAGWepqXgzgqsU7psyY3M6ig5IsuURKmx/RKG5Oyu5DxyLu4wVEP1L9487jMgNTNdTMnyWAXzoKgkt5d8JRPHvWmKS1XGodxmBiE45+UYS2xr9wlh7R3WF7KKItfIwYDh+mt79cjM1w9EnK/J9wCSm26hqexjy06fFFY9WZ12n71Zw5B10Mq5K9oLxUO9zGc4OENCQhLgAKjCM4JA3n3Y4a+/oKMQ7Lm1VhK8GfmEJcRGdgEpqW8KfZCzUn7aqLGz5XnRihhMx3vcUqA7fvkgQEqUU8mv6QY8bnHr2CDPdSXvLuuitzA0P/OJm9Yfv3Qq0zPYXR99YQjNAATviWtor7bMaUe8TMyWoyDWqaU9LqmdROuppsjeVgaYdNqFtbwY0=) 2026-03-31 01:58:13.700514 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8308BNIi6iKwrGBJr6Bf1pwzA2LYCcDaJXSRMUJ9IQPuPMgbtpY4hN9XLlPCeF2LP2RLP0G3uly6UoTPPfx2M=) 2026-03-31 01:58:13.700527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBEmDKrrDxxWa0hoo1V3bl53r9C18c60/PljE4vTgPX4) 2026-03-31 01:58:13.700534 | orchestrator | 2026-03-31 01:58:13.700540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:13.700547 | orchestrator | Tuesday 31 March 2026 01:58:10 +0000 (0:00:01.089) 0:00:21.763 ********* 2026-03-31 01:58:13.700556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCO5xEaTr+8yr16pE6sjndDkDIaKIi5TFWPZgePArKoDAeFyzsC+IVRgjifRo4ut7PJXTv+6BzofVUl//8nGr+zrf7hH9WNjtCWJJh7kybVbDjwCIx++JqDt9Er9d+PEfWzw2SfS7+iB/KEuSR8aJ7KzP8FBN0q0b3P292RMxXZBQxOEU0erqJOeeUqE6im/A6/5+MIyx1NLud2jD4jb+HM6C8Ws+655v8PFNDzaqHgt85Rjhg3xZXGVIrMm+bCn8FlITf6CsybEZVH0rs+t7h2qoPV3MrwwAhkn5F0aoglMV/1zxZ4UhtC+DjZLOi3OfL/0A4GCDldW/xUvDUOprpvqMMjDuQdfCseMZWe0xFd+QdUx8DT5ZUzNf9iQrOGUlRf7dAJvqYFs3DAUM2rAM2qE1QhxogvDxqWakJP7fsNGfR9ge8I2aksxDdOFdI/iEc1tpoBonK0fBATIK6LaWS8QMDaq136dO13ekIGbg6cNCQdrQGnVRcyTqS2cK6Go9M=) 2026-03-31 01:58:13.700563 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGzKZ6wIiFGyi4uqiYpyBBTRPewWZsfhQDGCKHKtGPsd0jkj1HO7Z1hKcvX+D0SmgMo4ZKOhA/9HL/oa3wk1scA=) 2026-03-31 01:58:13.700569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINj3R2mKsDUx6ei4NDgBpokK+5sThTgjOXsNa/VHdBi9) 2026-03-31 01:58:13.700575 | orchestrator | 2026-03-31 01:58:13.700581 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:13.700588 | orchestrator | Tuesday 31 March 2026 01:58:11 +0000 (0:00:01.089) 0:00:22.852 ********* 2026-03-31 01:58:13.700594 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY4maVTxGhKc5R+YZKqiHD3nLd9crNlDOe88A6VUJwDY4ttZmpJIaErA77vdDGVWpsbxZUBZu1NFX3WF4khAz5LWG2TiMXlfyZkHbI7IonXUNAZ2uJKl2J2ef6QFtP/pHCPaZZw2RPgs/Be2FOdUeVgT4tDLJ1e/xnHYyUrB90o4LvQ7D0rOGisKTwuGXi3eTbXCUwEch1BPcOuPT1fekvVdD4Gi4Q/z5M5v2PdwK4QUS6B32zpFbA+81FkBuKR7Zx5K7nga3vmpaxB+ugX+rADt3mp7ik/Qp/3rfL3BqcIp3PPIFIhKqDhvnAtDC+kwwCzhI5hrgztxNA4bN9Nb8KKp3epXdVFc7tQSN1KIRGBRJCEmkn6hLBiDRVwvVcP7OUCkn5yXFJFKtAzeZM/OsxMfRSnWjbIPDMLmyBS1YdtMOAQiN5fsnkrPUafLx8PhxFuppVOUUFeiA8qvylVobB2mui6SiIbFYdBEjmmDTnH08WY+dQMdaMwV/K7SS5oVE=) 2026-03-31 01:58:13.700601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHfCEYr52/KYN8k2U9R01CLbL8NJ8IeMQ/afdbye2/fN) 2026-03-31 01:58:13.700607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcrR23a8DZgygis7IV4dCrF9Nd6WeaHlWMhon8JsqHpW3l6TvPZhVV3zSdJbKf578gvoju8anTCv/CHxq8OWNg=) 2026-03-31 01:58:13.700614 | orchestrator | 2026-03-31 01:58:13.700621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:13.700627 | orchestrator | Tuesday 31 March 2026 01:58:12 +0000 (0:00:01.123) 0:00:23.976 ********* 2026-03-31 01:58:13.700639 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3zTWIqfiFX8Qhc5qdVp1sEFn/x1uwqqGZqnQ/eEWoCqKzL16hqZBO+9TOW3TaoFkOp9a+f4aroCd6UMG+s+/QT/IHwyti4ZHtNX5jGbL3bJxuS3LxPfcOj838HZUBvrDrVd70hvjaWR2iTjmAxWmo5dMie3l/KpuSX3Xz1j5tusQEAvu1sTLko4sIFWSTzZ8IBEp0mfAEdGNUmwRoIJDG5b0b1/UYHMVKF9FBmrXaqoJFVH75sD8yAkJ9uXBfqCIN0JZ8Y0bWwgZoE1OEvm7+V+Ik0FhenIB8q6/a+d0Qk/txmZydzKEp7mXgQsGUS44iJu6sJPPcOJHO6/Za478D90N5lvYBnnhzBdogP+VSeuLg0uTb3Dd6hRjWdDxe7OBBInRTtW9FL5plvIoX/s3MEOtJDpIoTtpC2Nmpoj4NEsOm2NNyXU4ig3KShEJ8a9LijovSn7xU+88oiea3iv+OtgBHCHI30BcJMdqD1kUvHp1PUdHCV1cKJRrLPwcC09c=) 2026-03-31 
01:58:18.413417 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHRuRlR+Ux2+QQgpVVuFaHEnlZ7CokFIj9oReREI/rRaqHkPbOheb9byKKDr9jFV4/D88PPvO4R3fRi7q64q3JE=) 2026-03-31 01:58:18.413527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMy5tV/4dj9lDS9GVmhmlPWcR/+3Ij5BsxEKdIhR2uvJ) 2026-03-31 01:58:18.413570 | orchestrator | 2026-03-31 01:58:18.413583 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:18.413595 | orchestrator | Tuesday 31 March 2026 01:58:13 +0000 (0:00:01.129) 0:00:25.106 ********* 2026-03-31 01:58:18.413607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1xQf+wcHqny+A3lBbVVbZwG5stY06UgTfl0NJW+l6lo7IYJgymKKJr+X0fk9jA5RYgajWRDMiUCGBu81YIaP0a8ViYko1h+aAZLFPD5yJp/BpbyF5STmQAbxq37ERlETnmqq2Woh0A1XRnkhXSBcYsPIDoPzUj/2hhVJkN75ENH74tsP0DdVIuKuNY+1UGedGBmzx9X8QHZWWNSgqtsG0xysPJGPOWgTysEyAhzM3yWHo54cv//3VW11zbHTq9OwBCLkjfrIEdtYZ/ZfVxLm5C/2NZzwJoj451Kmx55WcEJzYif8O3eo2CQ57h14ExFB6vl7p8C8BdAhEEqfSDAXbzOBbAhDzrP1+HNilWbHnlWOmZisW+YwSPgMHwTLIv2+Cai50F8AgxPUr28vfgkPz4jYJR+rSbmYxjwZjNaatCDUPo7629EGbU46giSHxtDHllDdCGsSXOHVyrewvtTk3KNTEs5720c+KjB0zCGJbMBIdMrxwgEYaSoBzSIHoUnE=) 2026-03-31 01:58:18.413619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDnkOBOqmcR2d1rrqjn5iXknAU5qcu+Suvk4ZrjYLnpDko/f39ZV5lwyHEPdborwDEtZSEv/s8pQIxS1KHsJ4q4=) 2026-03-31 01:58:18.413631 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZtelO7ewWQTwbFWZ8nzfdt65cfTPUAw2ktH2FAsmTr) 2026-03-31 01:58:18.413642 | orchestrator | 2026-03-31 01:58:18.413653 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:18.413663 | orchestrator | 
Tuesday 31 March 2026 01:58:14 +0000 (0:00:01.120) 0:00:26.227 ********* 2026-03-31 01:58:18.413674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPpMpgsPdZgOCjpXw1Hoh4gc+AiPDYcwVKmJAv+dvN1nyrGhV1PxbIaPnbnSrmVbuxGCSmoaX9sycHSVV71LN+grJZArSAmeAsY7pXLhiqxJF5s6hJgF2bhf+V2JgGgMrk5qHieMFBrYMrqgTBpnfvAeKRmHO6fduzSV49IWciDzSzu6TUrZ+QCSforWG1vIgOX4p8kx6HxSU9hiJ/tQGQeNGUUB2t64zBsiEWgH6hIX1iQQMXOf3kTGXwKafYkXaNeeyBmY4icr6kOvBa96PAVAD2spYx/NLOxgT0wn24YXTZUnVK9r+77mj5TEgGGR99zA5vwifJQ+z44jZlQo1BygTjDfmPUqrAMbhiTi9L8lWbDKexg+4nAimUqri0r1r3+N5RPgQY1EhJQ2QwyQ3nybatG8m3P1kToUsm2Sx0KVCtX2dCafnZx/73+NT5t2mBEeooXBq4i/Cx8lAbVMABQV/pS5CEcMcap1DOomXHj+JdHI8ylzt5hT6I5bak4uM=) 2026-03-31 01:58:18.413684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHYRN7P8KvAPcEahkSZ2nRNPvRiA4tNEAaZZUo92sIKo2BKq2SPq4WZU9+iM/nKEL8RfUnMhPQ6MKJ+SDFuuKgM=) 2026-03-31 01:58:18.413694 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII03D+XF2pnC9YU3Ynkzxl7rzqg/JyrMvy8LWQDgE2em) 2026-03-31 01:58:18.413704 | orchestrator | 2026-03-31 01:58:18.413715 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-31 01:58:18.413724 | orchestrator | Tuesday 31 March 2026 01:58:15 +0000 (0:00:01.119) 0:00:27.346 ********* 2026-03-31 01:58:18.413735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMaJH2+RmD3Po/0Y23+e1TzXooKsKfJbySbWmPIPbMFC) 2026-03-31 01:58:18.413787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDpzGdp+dnXne69V7NRzrAhl0RtdQG56kMOcjEjXwyBss87/hmfu/CU+NKPeMs4lVxouqMjR4aELZZed0VPnr7ymEZ6zYBkAw6MHN1bB0ryml7F8CljVJISRdjfcOFIQpy+ADvsAR5U950AEfSfwTqb+YpGqaX0/rhBj3gV6TBfcKu3wjlMQfZMtfz10ZyhtFnfRxjryA4N81QCglCjO+A2aYklSsrCIuoexi/b8q5Es7GsKjxCwzAQwlDPlxaa2tkPvSpz6mMVcF3gbwM/txpAhh3u8pQZdJsb5YPmk/FndYDTczDcH1eNpsanzJkr+agoNxV3DO55P6IfAxSIq8jDFRvchY/V3zO+pmb7Z2YMSg4rAQHqmB5EZf+MzJC2jNLn4ky3kjbKOgHdfxYGCKIN6GJ0bOx6OAUuJkBXA3h8VgeqAj89DVj2qpu/bNyIABxAd1m/osk21xYj50uw4BlbyKSSP8DfssTPVu1DnPHnVVXKWQeo0UcencY28RGhetk=) 2026-03-31 01:58:18.413801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNRy54ZYjRlipZvOl0wX3k79NOFLyYs9WOO+EGuzPfZbET9HnYSjq08XbcYDTbO75aHybM9vy+sGjewEwe2lc1o=) 2026-03-31 01:58:18.413811 | orchestrator | 2026-03-31 01:58:18.413822 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-31 01:58:18.413841 | orchestrator | Tuesday 31 March 2026 01:58:17 +0000 (0:00:01.153) 0:00:28.500 ********* 2026-03-31 01:58:18.413853 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-31 01:58:18.413864 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-31 01:58:18.413893 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-31 01:58:18.413905 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-31 01:58:18.413916 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-31 01:58:18.413926 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-31 01:58:18.413936 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-31 01:58:18.413947 | orchestrator | skipping: [testbed-manager] 2026-03-31 01:58:18.413959 | orchestrator | 2026-03-31 01:58:18.413970 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] *************
2026-03-31 01:58:18.413981 | orchestrator | Tuesday 31 March 2026 01:58:17 +0000 (0:00:00.177) 0:00:28.678 *********
2026-03-31 01:58:18.413992 | orchestrator | skipping: [testbed-manager]
2026-03-31 01:58:18.414003 | orchestrator |
2026-03-31 01:58:18.414014 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-03-31 01:58:18.414079 | orchestrator | Tuesday 31 March 2026 01:58:17 +0000 (0:00:00.069) 0:00:28.747 *********
2026-03-31 01:58:18.414090 | orchestrator | skipping: [testbed-manager]
2026-03-31 01:58:18.414102 | orchestrator |
2026-03-31 01:58:18.414114 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-03-31 01:58:18.414125 | orchestrator | Tuesday 31 March 2026 01:58:17 +0000 (0:00:00.059) 0:00:28.807 *********
2026-03-31 01:58:18.414135 | orchestrator | changed: [testbed-manager]
2026-03-31 01:58:18.414147 | orchestrator |
2026-03-31 01:58:18.414159 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 01:58:18.414171 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 01:58:18.414183 | orchestrator |
2026-03-31 01:58:18.414193 | orchestrator |
2026-03-31 01:58:18.414204 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 01:58:18.414215 | orchestrator | Tuesday 31 March 2026 01:58:18 +0000 (0:00:00.782) 0:00:29.589 *********
2026-03-31 01:58:18.414231 | orchestrator | ===============================================================================
2026-03-31 01:58:18.414245 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.41s
2026-03-31 01:58:18.414328 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.56s
2026-03-31 01:58:18.414342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s
2026-03-31 01:58:18.414353 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2026-03-31 01:58:18.414364 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-03-31 01:58:18.414374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-03-31 01:58:18.414384 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-03-31 01:58:18.414394 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-03-31 01:58:18.414404 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-03-31 01:58:18.414415 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-03-31 01:58:18.414426 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-03-31 01:58:18.414435 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-03-31 01:58:18.414443 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-03-31 01:58:18.414453 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-03-31 01:58:18.414475 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-03-31 01:58:18.414486 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-03-31 01:58:18.414496 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s
2026-03-31 01:58:18.414507 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s
2026-03-31 01:58:18.414519 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2026-03-31 01:58:18.414529 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-03-31 01:58:18.747877 | orchestrator | + osism apply squid
2026-03-31 01:58:30.992895 | orchestrator | 2026-03-31 01:58:30 | INFO  | Task aa40ec91-51db-4363-9368-d7a0506e9206 (squid) was prepared for execution.
2026-03-31 01:58:30.992974 | orchestrator | 2026-03-31 01:58:30 | INFO  | It takes a moment until task aa40ec91-51db-4363-9368-d7a0506e9206 (squid) has been started and output is visible here.
2026-03-31 02:00:33.487644 | orchestrator |
2026-03-31 02:00:33.487766 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-03-31 02:00:33.487785 | orchestrator |
2026-03-31 02:00:33.487797 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-03-31 02:00:33.487809 | orchestrator | Tuesday 31 March 2026 01:58:35 +0000 (0:00:00.183) 0:00:00.183 *********
2026-03-31 02:00:33.487821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 02:00:33.487833 | orchestrator |
2026-03-31 02:00:33.487844 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-03-31 02:00:33.487855 | orchestrator | Tuesday 31 March 2026 01:58:35 +0000 (0:00:00.086) 0:00:00.270 *********
2026-03-31 02:00:33.487866 | orchestrator | ok: [testbed-manager]
2026-03-31 02:00:33.487981 | orchestrator |
2026-03-31 02:00:33.487994 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-03-31 02:00:33.488006 | orchestrator | Tuesday 31 March 2026 01:58:37 +0000 (0:00:01.615) 0:00:01.885 *********
2026-03-31 02:00:33.488018 | orchestrator | changed: [testbed-manager] =>
(item=/opt/squid/configuration)
2026-03-31 02:00:33.488029 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-03-31 02:00:33.488040 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-03-31 02:00:33.488051 | orchestrator |
2026-03-31 02:00:33.488062 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-03-31 02:00:33.488073 | orchestrator | Tuesday 31 March 2026 01:58:38 +0000 (0:00:01.294) 0:00:03.180 *********
2026-03-31 02:00:33.488084 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-03-31 02:00:33.488096 | orchestrator |
2026-03-31 02:00:33.488107 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-03-31 02:00:33.488118 | orchestrator | Tuesday 31 March 2026 01:58:39 +0000 (0:00:01.145) 0:00:04.325 *********
2026-03-31 02:00:33.488128 | orchestrator | ok: [testbed-manager]
2026-03-31 02:00:33.488139 | orchestrator |
2026-03-31 02:00:33.488150 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-03-31 02:00:33.488161 | orchestrator | Tuesday 31 March 2026 01:58:39 +0000 (0:00:00.384) 0:00:04.710 *********
2026-03-31 02:00:33.488176 | orchestrator | changed: [testbed-manager]
2026-03-31 02:00:33.488195 | orchestrator |
2026-03-31 02:00:33.488215 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-03-31 02:00:33.488233 | orchestrator | Tuesday 31 March 2026 01:58:40 +0000 (0:00:00.996) 0:00:05.706 *********
2026-03-31 02:00:33.488253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-03-31 02:00:33.488277 | orchestrator | ok: [testbed-manager]
2026-03-31 02:00:33.488293 | orchestrator |
2026-03-31 02:00:33.488309 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-31 02:00:33.488361 | orchestrator | Tuesday 31 March 2026 01:59:16 +0000 (0:00:35.674) 0:00:41.380 *********
2026-03-31 02:00:33.488379 | orchestrator | changed: [testbed-manager]
2026-03-31 02:00:33.488397 | orchestrator |
2026-03-31 02:00:33.488415 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-31 02:00:33.488434 | orchestrator | Tuesday 31 March 2026 01:59:32 +0000 (0:00:15.876) 0:00:57.257 *********
2026-03-31 02:00:33.488454 | orchestrator | Pausing for 60 seconds
2026-03-31 02:00:33.488474 | orchestrator | changed: [testbed-manager]
2026-03-31 02:00:33.488493 | orchestrator |
2026-03-31 02:00:33.488512 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-31 02:00:33.488530 | orchestrator | Tuesday 31 March 2026 02:00:32 +0000 (0:01:00.087) 0:01:57.345 *********
2026-03-31 02:00:33.488549 | orchestrator | ok: [testbed-manager]
2026-03-31 02:00:33.488568 | orchestrator |
2026-03-31 02:00:33.488587 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-31 02:00:33.488605 | orchestrator | Tuesday 31 March 2026 02:00:32 +0000 (0:00:00.078) 0:01:57.423 *********
2026-03-31 02:00:33.488624 | orchestrator | changed: [testbed-manager]
2026-03-31 02:00:33.488636 | orchestrator |
2026-03-31 02:00:33.488646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:00:33.488657 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:00:33.488668 | orchestrator |
2026-03-31 02:00:33.488679 | orchestrator |
2026-03-31 02:00:33.488690 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:00:33.488705 | orchestrator | Tuesday 31 March 2026 02:00:33 +0000 (0:00:00.593) 0:01:58.017 *********
2026-03-31 02:00:33.488723 | orchestrator | ===============================================================================
2026-03-31 02:00:33.488741 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-31 02:00:33.488759 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.67s
2026-03-31 02:00:33.488802 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.88s
2026-03-31 02:00:33.488824 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.62s
2026-03-31 02:00:33.488844 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.29s
2026-03-31 02:00:33.488864 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.15s
2026-03-31 02:00:33.488912 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s
2026-03-31 02:00:33.488931 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2026-03-31 02:00:33.488949 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2026-03-31 02:00:33.488967 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-03-31 02:00:33.488984 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-03-31 02:00:33.825241 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-31 02:00:33.825534 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-31 02:00:33.870439 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 02:00:33.870525 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh
kolla/release
2026-03-31 02:00:33.878244 | orchestrator | + set -e
2026-03-31 02:00:33.878299 | orchestrator | + NAMESPACE=kolla/release
2026-03-31 02:00:33.878310 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-31 02:00:33.883764 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-31 02:00:33.953411 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-31 02:00:33.954806 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-31 02:00:46.129957 | orchestrator | 2026-03-31 02:00:46 | INFO  | Task 4b8bb145-4ae8-4382-a117-3b885799de04 (operator) was prepared for execution.
2026-03-31 02:00:46.130127 | orchestrator | 2026-03-31 02:00:46 | INFO  | It takes a moment until task 4b8bb145-4ae8-4382-a117-3b885799de04 (operator) has been started and output is visible here.
2026-03-31 02:01:03.616968 | orchestrator |
2026-03-31 02:01:03.617083 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-31 02:01:03.617102 | orchestrator |
2026-03-31 02:01:03.617114 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-31 02:01:03.617127 | orchestrator | Tuesday 31 March 2026 02:00:50 +0000 (0:00:00.153) 0:00:00.153 *********
2026-03-31 02:01:03.617138 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:01:03.617150 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:01:03.617161 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:01:03.617172 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:01:03.617183 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:01:03.617194 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:01:03.617205 | orchestrator |
2026-03-31 02:01:03.617216 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-31 02:01:03.617227 | orchestrator | Tuesday 31 March 2026 02:00:53 +0000 (0:00:03.384) 0:00:03.537
********* 2026-03-31 02:01:03.617238 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:01:03.617249 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:01:03.617260 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:01:03.617270 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:01:03.617281 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:01:03.617292 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:01:03.617303 | orchestrator | 2026-03-31 02:01:03.617314 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-31 02:01:03.617324 | orchestrator | 2026-03-31 02:01:03.617335 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-31 02:01:03.617346 | orchestrator | Tuesday 31 March 2026 02:00:54 +0000 (0:00:00.770) 0:00:04.307 ********* 2026-03-31 02:01:03.617357 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:01:03.617368 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:01:03.617379 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:01:03.617389 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:01:03.617400 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:01:03.617412 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:01:03.617423 | orchestrator | 2026-03-31 02:01:03.617434 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-31 02:01:03.617461 | orchestrator | Tuesday 31 March 2026 02:00:54 +0000 (0:00:00.212) 0:00:04.520 ********* 2026-03-31 02:01:03.617475 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:01:03.617489 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:01:03.617502 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:01:03.617514 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:01:03.617527 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:01:03.617540 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:01:03.617553 | orchestrator | 2026-03-31 02:01:03.617566 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-31 02:01:03.617579 | orchestrator | Tuesday 31 March 2026 02:00:55 +0000 (0:00:00.194) 0:00:04.715 ********* 2026-03-31 02:01:03.617592 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:03.617605 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:03.617619 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:01:03.617630 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:01:03.617641 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:03.617652 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:03.617663 | orchestrator | 2026-03-31 02:01:03.617674 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-31 02:01:03.617720 | orchestrator | Tuesday 31 March 2026 02:00:55 +0000 (0:00:00.659) 0:00:05.374 ********* 2026-03-31 02:01:03.617743 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:03.617754 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:01:03.617765 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:03.617776 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:03.617787 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:01:03.617797 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:03.617832 | orchestrator | 2026-03-31 02:01:03.617844 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-31 02:01:03.617855 | orchestrator | Tuesday 31 March 2026 02:00:56 +0000 (0:00:00.858) 0:00:06.233 ********* 2026-03-31 02:01:03.617866 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-31 02:01:03.617877 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-31 02:01:03.617888 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-31 02:01:03.617936 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-31 02:01:03.617956 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-31 02:01:03.617976 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-31 02:01:03.617993 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-31 02:01:03.618008 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-31 02:01:03.618081 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-31 02:01:03.618094 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-31 02:01:03.618105 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-31 02:01:03.618116 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-31 02:01:03.618126 | orchestrator | 2026-03-31 02:01:03.618137 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-31 02:01:03.618148 | orchestrator | Tuesday 31 March 2026 02:00:57 +0000 (0:00:01.177) 0:00:07.410 ********* 2026-03-31 02:01:03.618159 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:03.618205 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:01:03.618218 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:01:03.618229 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:03.618240 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:03.618250 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:03.618262 | orchestrator | 2026-03-31 02:01:03.618273 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-31 02:01:03.618285 | orchestrator | Tuesday 31 March 2026 02:01:00 +0000 (0:00:02.264) 0:00:09.675 ********* 2026-03-31 02:01:03.618296 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-31 02:01:03.618307 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-31 02:01:03.618318 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-31 02:01:03.618329 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618360 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618371 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618382 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618393 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618404 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-31 02:01:03.618414 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618425 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618436 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618447 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618457 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618468 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-31 02:01:03.618479 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618489 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618500 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618511 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618521 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618543 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-31 02:01:03.618554 | 
orchestrator | 2026-03-31 02:01:03.618565 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-31 02:01:03.618577 | orchestrator | Tuesday 31 March 2026 02:01:01 +0000 (0:00:01.258) 0:00:10.933 ********* 2026-03-31 02:01:03.618588 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:03.618599 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:03.618610 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:03.618621 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:03.618631 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:03.618642 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:03.618653 | orchestrator | 2026-03-31 02:01:03.618664 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-31 02:01:03.618675 | orchestrator | Tuesday 31 March 2026 02:01:01 +0000 (0:00:00.184) 0:00:11.118 ********* 2026-03-31 02:01:03.618686 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:03.618697 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:03.618708 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:03.618719 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:03.618730 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:03.618741 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:03.618752 | orchestrator | 2026-03-31 02:01:03.618763 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-31 02:01:03.618774 | orchestrator | Tuesday 31 March 2026 02:01:01 +0000 (0:00:00.211) 0:00:11.329 ********* 2026-03-31 02:01:03.618785 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:03.618796 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:03.618806 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:03.618817 | orchestrator | changed: [testbed-node-2] 2026-03-31 
02:01:03.618828 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:03.618839 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:01:03.618849 | orchestrator | 2026-03-31 02:01:03.618860 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-31 02:01:03.618871 | orchestrator | Tuesday 31 March 2026 02:01:02 +0000 (0:00:00.623) 0:00:11.953 ********* 2026-03-31 02:01:03.618882 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:03.618922 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:03.618935 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:03.618946 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:03.618956 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:03.618967 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:03.618978 | orchestrator | 2026-03-31 02:01:03.618988 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-31 02:01:03.619011 | orchestrator | Tuesday 31 March 2026 02:01:02 +0000 (0:00:00.182) 0:00:12.135 ********* 2026-03-31 02:01:03.619022 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 02:01:03.619033 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-31 02:01:03.619044 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:03.619054 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:01:03.619065 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 02:01:03.619076 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:03.619087 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-31 02:01:03.619097 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:03.619108 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-31 02:01:03.619119 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 02:01:03.619130 | orchestrator | changed: [testbed-node-0] 2026-03-31 
02:01:03.619140 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:03.619151 | orchestrator | 2026-03-31 02:01:03.619162 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-31 02:01:03.619173 | orchestrator | Tuesday 31 March 2026 02:01:03 +0000 (0:00:00.750) 0:00:12.885 ********* 2026-03-31 02:01:03.619191 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:03.619202 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:03.619213 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:03.619224 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:03.619235 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:03.619245 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:03.619256 | orchestrator | 2026-03-31 02:01:03.619268 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-31 02:01:03.619278 | orchestrator | Tuesday 31 March 2026 02:01:03 +0000 (0:00:00.165) 0:00:13.051 ********* 2026-03-31 02:01:03.619289 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:03.619300 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:03.619311 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:03.619321 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:03.619340 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:05.026285 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:05.026404 | orchestrator | 2026-03-31 02:01:05.026431 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-31 02:01:05.026449 | orchestrator | Tuesday 31 March 2026 02:01:03 +0000 (0:00:00.200) 0:00:13.251 ********* 2026-03-31 02:01:05.026466 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:05.026481 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:05.026497 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
02:01:05.026514 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:05.026530 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:05.026546 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:05.026563 | orchestrator | 2026-03-31 02:01:05.026579 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-31 02:01:05.026595 | orchestrator | Tuesday 31 March 2026 02:01:03 +0000 (0:00:00.183) 0:00:13.434 ********* 2026-03-31 02:01:05.026612 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:01:05.026627 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:01:05.026642 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:01:05.026657 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:01:05.026671 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:01:05.026688 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:01:05.026704 | orchestrator | 2026-03-31 02:01:05.026721 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-31 02:01:05.026737 | orchestrator | Tuesday 31 March 2026 02:01:04 +0000 (0:00:00.693) 0:00:14.128 ********* 2026-03-31 02:01:05.026754 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:01:05.026771 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:01:05.026789 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:01:05.026807 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:01:05.026824 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:01:05.026842 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:01:05.026860 | orchestrator | 2026-03-31 02:01:05.026873 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:01:05.026939 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.026961 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.026978 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.026992 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.027007 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.027048 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 02:01:05.027064 | orchestrator | 2026-03-31 02:01:05.027078 | orchestrator | 2026-03-31 02:01:05.027093 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:01:05.027109 | orchestrator | Tuesday 31 March 2026 02:01:04 +0000 (0:00:00.259) 0:00:14.388 ********* 2026-03-31 02:01:05.027125 | orchestrator | =============================================================================== 2026-03-31 02:01:05.027140 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s 2026-03-31 02:01:05.027156 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 2.26s 2026-03-31 02:01:05.027172 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2026-03-31 02:01:05.027190 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-03-31 02:01:05.027205 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s 2026-03-31 02:01:05.027224 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2026-03-31 02:01:05.027240 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s 2026-03-31 02:01:05.027257 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.69s 2026-03-31 02:01:05.027274 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s 2026-03-31 02:01:05.027291 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2026-03-31 02:01:05.027306 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2026-03-31 02:01:05.027322 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s 2026-03-31 02:01:05.027340 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-03-31 02:01:05.027356 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s 2026-03-31 02:01:05.027372 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2026-03-31 02:01:05.027387 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-03-31 02:01:05.027397 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2026-03-31 02:01:05.027407 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-03-31 02:01:05.027416 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-03-31 02:01:05.350110 | orchestrator | + osism apply --environment custom facts 2026-03-31 02:01:07.272388 | orchestrator | 2026-03-31 02:01:07 | INFO  | Trying to run play facts in environment custom 2026-03-31 02:01:17.431330 | orchestrator | 2026-03-31 02:01:17 | INFO  | Task 6bea137c-692f-44cf-b568-7ef290bb8a7c (facts) was prepared for execution. 2026-03-31 02:01:17.431441 | orchestrator | 2026-03-31 02:01:17 | INFO  | It takes a moment until task 6bea137c-692f-44cf-b568-7ef290bb8a7c (facts) has been started and output is visible here. 
2026-03-31 02:02:03.297271 | orchestrator |
2026-03-31 02:02:03.297381 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-31 02:02:03.297396 | orchestrator |
2026-03-31 02:02:03.297407 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-31 02:02:03.297418 | orchestrator | Tuesday 31 March 2026 02:01:21 +0000 (0:00:00.086) 0:00:00.086 *********
2026-03-31 02:02:03.297428 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:03.297439 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:03.297450 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:03.297460 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.297470 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:03.297480 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.297513 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.297523 | orchestrator |
2026-03-31 02:02:03.297534 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-31 02:02:03.297544 | orchestrator | Tuesday 31 March 2026 02:01:23 +0000 (0:00:01.415) 0:00:01.501 *********
2026-03-31 02:02:03.297554 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:03.297563 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:03.297573 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.297583 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.297592 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:03.297602 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:03.297612 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.297622 | orchestrator |
2026-03-31 02:02:03.297632 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-31 02:02:03.297642 | orchestrator |
2026-03-31 02:02:03.297652 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-31 02:02:03.297662 | orchestrator | Tuesday 31 March 2026 02:01:24 +0000 (0:00:01.219) 0:00:02.721 *********
2026-03-31 02:02:03.297672 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.297682 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.297691 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.297701 | orchestrator |
2026-03-31 02:02:03.297711 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-31 02:02:03.297722 | orchestrator | Tuesday 31 March 2026 02:01:24 +0000 (0:00:00.108) 0:00:02.830 *********
2026-03-31 02:02:03.297732 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.297741 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.297751 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.297761 | orchestrator |
2026-03-31 02:02:03.297770 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-31 02:02:03.297780 | orchestrator | Tuesday 31 March 2026 02:01:24 +0000 (0:00:00.208) 0:00:03.038 *********
2026-03-31 02:02:03.297790 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.297800 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.297809 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.297822 | orchestrator |
2026-03-31 02:02:03.297833 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-31 02:02:03.297846 | orchestrator | Tuesday 31 March 2026 02:01:25 +0000 (0:00:00.228) 0:00:03.267 *********
2026-03-31 02:02:03.297858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:02:03.297871 | orchestrator |
2026-03-31 02:02:03.297883 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-31 02:02:03.297895 | orchestrator | Tuesday 31 March 2026 02:01:25 +0000 (0:00:00.142) 0:00:03.410 *********
2026-03-31 02:02:03.297906 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.297917 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.297928 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.298083 | orchestrator |
2026-03-31 02:02:03.298096 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-31 02:02:03.298108 | orchestrator | Tuesday 31 March 2026 02:01:25 +0000 (0:00:00.140) 0:00:03.840 *********
2026-03-31 02:02:03.298119 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:03.298131 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:03.298143 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:03.298154 | orchestrator |
2026-03-31 02:02:03.298166 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-31 02:02:03.298177 | orchestrator | Tuesday 31 March 2026 02:01:25 +0000 (0:00:00.140) 0:00:03.980 *********
2026-03-31 02:02:03.298189 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.298199 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.298209 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.298218 | orchestrator |
2026-03-31 02:02:03.298228 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-31 02:02:03.298247 | orchestrator | Tuesday 31 March 2026 02:01:26 +0000 (0:00:01.071) 0:00:05.052 *********
2026-03-31 02:02:03.298257 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.298266 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.298276 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.298285 | orchestrator |
2026-03-31 02:02:03.298295 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-31 02:02:03.298305 | orchestrator | Tuesday 31 March 2026 02:01:27 +0000 (0:00:00.531) 0:00:05.583 *********
2026-03-31 02:02:03.298315 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.298324 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.298334 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.298344 | orchestrator |
2026-03-31 02:02:03.298399 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-31 02:02:03.298411 | orchestrator | Tuesday 31 March 2026 02:01:28 +0000 (0:00:01.063) 0:00:06.647 *********
2026-03-31 02:02:03.298421 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.298430 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.298440 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.298450 | orchestrator |
2026-03-31 02:02:03.298459 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-31 02:02:03.298469 | orchestrator | Tuesday 31 March 2026 02:01:45 +0000 (0:00:16.799) 0:00:23.446 *********
2026-03-31 02:02:03.298478 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:03.298488 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:03.298498 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:03.298507 | orchestrator |
2026-03-31 02:02:03.298517 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-31 02:02:03.298545 | orchestrator | Tuesday 31 March 2026 02:01:45 +0000 (0:00:00.137) 0:00:23.584 *********
2026-03-31 02:02:03.298555 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:03.298565 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:03.298574 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:03.298584 | orchestrator |
2026-03-31 02:02:03.298594 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-31 02:02:03.298603 | orchestrator | Tuesday 31 March 2026 02:01:53 +0000 (0:00:07.993) 0:00:31.577 *********
2026-03-31 02:02:03.298613 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.298623 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.298632 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.298642 | orchestrator |
2026-03-31 02:02:03.298652 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-31 02:02:03.298662 | orchestrator | Tuesday 31 March 2026 02:01:53 +0000 (0:00:00.464) 0:00:32.041 *********
2026-03-31 02:02:03.298672 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-31 02:02:03.298682 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-31 02:02:03.298691 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-31 02:02:03.298701 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-31 02:02:03.298716 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-31 02:02:03.298726 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-31 02:02:03.298735 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-31 02:02:03.298744 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-31 02:02:03.298754 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-31 02:02:03.298764 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-31 02:02:03.298773 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-31 02:02:03.298783 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-31 02:02:03.298792 | orchestrator |
2026-03-31 02:02:03.298802 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-31 02:02:03.298819 | orchestrator | Tuesday 31 March 2026 02:01:57 +0000 (0:00:03.562) 0:00:35.603 *********
2026-03-31 02:02:03.298828 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.298838 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.298848 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.298858 | orchestrator |
2026-03-31 02:02:03.298868 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-31 02:02:03.298877 | orchestrator |
2026-03-31 02:02:03.298887 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 02:02:03.298897 | orchestrator | Tuesday 31 March 2026 02:01:58 +0000 (0:00:01.242) 0:00:36.846 *********
2026-03-31 02:02:03.298907 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:03.298916 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:03.298926 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:03.298959 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:03.298969 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:03.298979 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:03.298988 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:03.298998 | orchestrator |
2026-03-31 02:02:03.299007 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:02:03.299018 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:02:03.299028 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:02:03.299039 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:02:03.299048 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:02:03.299058 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:02:03.299068 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:02:03.299078 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:02:03.299088 | orchestrator |
2026-03-31 02:02:03.299097 | orchestrator |
2026-03-31 02:02:03.299121 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:02:03.299131 | orchestrator | Tuesday 31 March 2026 02:02:03 +0000 (0:00:04.541) 0:00:41.387 *********
2026-03-31 02:02:03.299151 | orchestrator | ===============================================================================
2026-03-31 02:02:03.299161 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.80s
2026-03-31 02:02:03.299171 | orchestrator | Install required packages (Debian) -------------------------------------- 7.99s
2026-03-31 02:02:03.299181 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s
2026-03-31 02:02:03.299190 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s
2026-03-31 02:02:03.299200 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-03-31 02:02:03.299210 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s
2026-03-31 02:02:03.299225 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-03-31 02:02:03.570570 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-03-31 02:02:03.570645 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-03-31 02:02:03.570651 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.53s
2026-03-31 02:02:03.570675 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-03-31 02:02:03.570680 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-31 02:02:03.570685 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-03-31 02:02:03.570690 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-03-31 02:02:03.570695 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-31 02:02:03.570700 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-03-31 02:02:03.570705 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s
2026-03-31 02:02:03.570721 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-31 02:02:03.907117 | orchestrator | + osism apply bootstrap
2026-03-31 02:02:16.009385 | orchestrator | 2026-03-31 02:02:16 | INFO  | Task 4677a954-ed33-4833-9ec3-5308f4700f5f (bootstrap) was prepared for execution.
2026-03-31 02:02:16.009501 | orchestrator | 2026-03-31 02:02:16 | INFO  | It takes a moment until task 4677a954-ed33-4833-9ec3-5308f4700f5f (bootstrap) has been started and output is visible here.
2026-03-31 02:02:32.845715 | orchestrator |
2026-03-31 02:02:32.845873 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-31 02:02:32.845911 | orchestrator |
2026-03-31 02:02:32.845932 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-31 02:02:32.845978 | orchestrator | Tuesday 31 March 2026 02:02:20 +0000 (0:00:00.168) 0:00:00.168 *********
2026-03-31 02:02:32.845991 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:32.846003 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:32.846014 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:32.846079 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:32.846091 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:32.846101 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:32.846112 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:32.846124 | orchestrator |
2026-03-31 02:02:32.846135 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-31 02:02:32.846146 | orchestrator |
2026-03-31 02:02:32.846157 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 02:02:32.846168 | orchestrator | Tuesday 31 March 2026 02:02:20 +0000 (0:00:00.314) 0:00:00.482 *********
2026-03-31 02:02:32.846178 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:32.846189 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:32.846200 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:32.846211 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:32.846222 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:32.846232 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:32.846243 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:32.846256 | orchestrator |
2026-03-31 02:02:32.846268 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-31 02:02:32.846281 | orchestrator |
2026-03-31 02:02:32.846293 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 02:02:32.846305 | orchestrator | Tuesday 31 March 2026 02:02:24 +0000 (0:00:03.848) 0:00:04.331 *********
2026-03-31 02:02:32.846319 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-31 02:02:32.846332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-31 02:02:32.846345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-31 02:02:32.846357 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-31 02:02:32.846370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 02:02:32.846382 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-31 02:02:32.846395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-31 02:02:32.846408 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-31 02:02:32.846424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 02:02:32.846476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-31 02:02:32.846497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 02:02:32.846513 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-31 02:02:32.846525 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-31 02:02:32.846538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 02:02:32.846551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 02:02:32.846565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 02:02:32.846578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 02:02:32.846589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 02:02:32.846600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 02:02:32.846611 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:32.846622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 02:02:32.846632 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 02:02:32.846643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-31 02:02:32.846653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 02:02:32.846664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-31 02:02:32.846674 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-31 02:02:32.846685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 02:02:32.846696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 02:02:32.846706 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-31 02:02:32.846717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 02:02:32.846728 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:32.846738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 02:02:32.846749 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-31 02:02:32.846759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 02:02:32.846770 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:32.846781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-31 02:02:32.846791 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:32.846802 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-31 02:02:32.846812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 02:02:32.846823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-31 02:02:32.846834 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-31 02:02:32.846845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:02:32.846855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-31 02:02:32.846866 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 02:02:32.846877 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-31 02:02:32.846894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 02:02:32.846937 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-31 02:02:32.847019 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:32.847039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 02:02:32.847058 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-31 02:02:32.847069 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:32.847080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-31 02:02:32.847091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-31 02:02:32.847101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-31 02:02:32.847141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-31 02:02:32.847153 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:32.847164 | orchestrator |
2026-03-31 02:02:32.847175 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-31 02:02:32.847186 | orchestrator |
2026-03-31 02:02:32.847197 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-31 02:02:32.847208 | orchestrator | Tuesday 31 March 2026 02:02:25 +0000 (0:00:00.546) 0:00:04.878 *********
2026-03-31 02:02:32.847219 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:32.847230 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:32.847241 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:32.847251 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:32.847262 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:32.847273 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:32.847284 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:32.847295 | orchestrator |
2026-03-31 02:02:32.847307 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-31 02:02:32.847318 | orchestrator | Tuesday 31 March 2026 02:02:26 +0000 (0:00:01.240) 0:00:06.119 *********
2026-03-31 02:02:32.847329 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:32.847348 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:32.847366 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:32.847385 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:32.847403 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:32.847418 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:32.847429 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:32.847439 | orchestrator |
2026-03-31 02:02:32.847450 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-31 02:02:32.847461 | orchestrator | Tuesday 31 March 2026 02:02:27 +0000 (0:00:01.298) 0:00:07.418 *********
2026-03-31 02:02:32.847473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:32.847486 | orchestrator |
2026-03-31 02:02:32.847497 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-31 02:02:32.847508 | orchestrator | Tuesday 31 March 2026 02:02:28 +0000 (0:00:00.321) 0:00:07.739 *********
2026-03-31 02:02:32.847518 | orchestrator | changed: [testbed-manager]
2026-03-31 02:02:32.847529 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:32.847540 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:32.847550 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:32.847561 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:32.847572 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:32.847583 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:32.847594 | orchestrator |
2026-03-31 02:02:32.847604 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-31 02:02:32.847615 | orchestrator | Tuesday 31 March 2026 02:02:30 +0000 (0:00:02.176) 0:00:09.915 *********
2026-03-31 02:02:32.847626 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:32.847638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:32.847651 | orchestrator |
2026-03-31 02:02:32.847662 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-31 02:02:32.847673 | orchestrator | Tuesday 31 March 2026 02:02:30 +0000 (0:00:00.277) 0:00:10.193 *********
2026-03-31 02:02:32.847684 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:32.847694 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:32.847705 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:32.847716 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:32.847726 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:32.847737 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:32.847757 | orchestrator |
2026-03-31 02:02:32.847768 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-31 02:02:32.847779 | orchestrator | Tuesday 31 March 2026 02:02:31 +0000 (0:00:01.051) 0:00:11.245 *********
2026-03-31 02:02:32.847789 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:32.847800 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:32.847811 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:32.847822 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:32.847832 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:32.847843 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:32.847853 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:32.847864 | orchestrator |
2026-03-31 02:02:32.847875 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-31 02:02:32.847886 | orchestrator | Tuesday 31 March 2026 02:02:32 +0000 (0:00:00.658) 0:00:11.903 *********
2026-03-31 02:02:32.847896 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:32.847907 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:32.847918 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:32.847933 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:32.847944 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:32.847979 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:32.847990 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:32.848000 | orchestrator |
2026-03-31 02:02:32.848011 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-31 02:02:32.848023 | orchestrator | Tuesday 31 March 2026 02:02:32 +0000 (0:00:00.254) 0:00:12.339 *********
2026-03-31 02:02:32.848034 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:32.848045 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:32.848064 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:44.925340 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:44.925473 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:44.925493 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:44.925512 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:44.925529 | orchestrator |
2026-03-31 02:02:44.925546 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-31 02:02:44.925565 | orchestrator | Tuesday 31 March 2026 02:02:32 +0000 (0:00:00.254) 0:00:12.594 *********
2026-03-31 02:02:44.925587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:44.925630 | orchestrator |
2026-03-31 02:02:44.925649 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-31 02:02:44.925670 | orchestrator | Tuesday 31 March 2026 02:02:33 +0000 (0:00:00.353) 0:00:12.948 *********
2026-03-31 02:02:44.925689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:44.925709 | orchestrator |
2026-03-31 02:02:44.925728 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-31 02:02:44.925747 | orchestrator | Tuesday 31 March 2026 02:02:33 +0000 (0:00:01.354) 0:00:13.254 *********
2026-03-31 02:02:44.925766 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.925785 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.925797 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.925808 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.925819 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.925834 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.925847 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.925859 | orchestrator |
2026-03-31 02:02:44.925872 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-31 02:02:44.925885 | orchestrator | Tuesday 31 March 2026 02:02:34 +0000 (0:00:01.354) 0:00:14.609 *********
2026-03-31 02:02:44.925933 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:44.925954 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:44.926115 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:44.926126 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:44.926137 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:44.926154 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:44.926172 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:44.926191 | orchestrator |
2026-03-31 02:02:44.926210 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-31 02:02:44.926229 | orchestrator | Tuesday 31 March 2026 02:02:35 +0000 (0:00:00.259) 0:00:14.868 *********
2026-03-31 02:02:44.926241 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.926252 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.926263 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.926273 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.926284 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.926294 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.926305 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.926316 | orchestrator |
2026-03-31 02:02:44.926327 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-31 02:02:44.926338 | orchestrator | Tuesday 31 March 2026 02:02:35 +0000 (0:00:00.551) 0:00:15.420 *********
2026-03-31 02:02:44.926348 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:44.926359 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:44.926370 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:44.926381 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:44.926391 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:44.926402 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:44.926413 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:44.926424 | orchestrator |
2026-03-31 02:02:44.926435 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-31 02:02:44.926447 | orchestrator | Tuesday 31 March 2026 02:02:36 +0000 (0:00:00.382) 0:00:15.802 *********
2026-03-31 02:02:44.926457 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.926468 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:44.926478 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:44.926489 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:44.926500 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:44.926511 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:44.926521 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:44.926532 | orchestrator |
2026-03-31 02:02:44.926544 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-31 02:02:44.926563 | orchestrator | Tuesday 31 March 2026 02:02:36 +0000 (0:00:00.526) 0:00:16.328 *********
2026-03-31 02:02:44.926582 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.926601 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:44.926619 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:44.926638 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:44.926649 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:44.926660 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:44.926670 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:44.926681 | orchestrator |
2026-03-31 02:02:44.926692 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-31 02:02:44.926703 | orchestrator | Tuesday 31 March 2026 02:02:37 +0000 (0:00:01.123) 0:00:17.452 *********
2026-03-31 02:02:44.926713 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.926743 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.926763 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.926782 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.926801 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.926819 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.926831 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.926841 | orchestrator |
2026-03-31 02:02:44.926852 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-31 02:02:44.926876 | orchestrator | Tuesday 31 March 2026 02:02:38 +0000 (0:00:01.021) 0:00:18.473 *********
2026-03-31 02:02:44.926917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:44.926938 | orchestrator |
2026-03-31 02:02:44.926985 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-31 02:02:44.926999 | orchestrator | Tuesday 31 March 2026 02:02:39 +0000 (0:00:00.329) 0:00:18.803 *********
2026-03-31 02:02:44.927010 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:44.927021 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:44.927032 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:02:44.927043 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:44.927053 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:02:44.927066 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:02:44.927085 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:44.927105 | orchestrator |
2026-03-31 02:02:44.927125 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-31 02:02:44.927145 | orchestrator | Tuesday 31 March 2026 02:02:40 +0000 (0:00:01.256) 0:00:20.059 *********
2026-03-31 02:02:44.927164 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.927183 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.927201 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.927218 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.927237 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.927256 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.927276 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.927295 | orchestrator |
2026-03-31 02:02:44.927307 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-31 02:02:44.927319 | orchestrator | Tuesday 31 March 2026 02:02:40 +0000 (0:00:00.241) 0:00:20.301 *********
2026-03-31 02:02:44.927330 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.927341 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.927352 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.927362 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.927379 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.927398 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.927416 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.927434 | orchestrator |
2026-03-31 02:02:44.927454 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-31 02:02:44.927473 | orchestrator | Tuesday 31 March 2026 02:02:40 +0000 (0:00:00.247) 0:00:20.548 *********
2026-03-31 02:02:44.927491 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.927510 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.927530 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.927548 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.927566 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.927586 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.927604 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.927623 | orchestrator |
2026-03-31 02:02:44.927642 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-31 02:02:44.927653 | orchestrator | Tuesday 31 March 2026 02:02:41 +0000 (0:00:00.241) 0:00:20.790 *********
2026-03-31 02:02:44.927665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:02:44.927678 | orchestrator |
2026-03-31 02:02:44.927696 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-31 02:02:44.927714 | orchestrator | Tuesday 31 March 2026 02:02:41 +0000 (0:00:00.307) 0:00:21.098 *********
2026-03-31 02:02:44.927734 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.927753 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.927787 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.927807 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.927825 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.927844 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.927863 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.927881 | orchestrator |
2026-03-31 02:02:44.927895 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-31 02:02:44.927906 | orchestrator | Tuesday 31 March 2026 02:02:41 +0000 (0:00:00.499) 0:00:21.597 *********
2026-03-31 02:02:44.927917 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:02:44.927928 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:02:44.927939 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:02:44.927950 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:02:44.927990 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:02:44.928002 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:02:44.928013 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:02:44.928023 | orchestrator |
2026-03-31 02:02:44.928034 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-31 02:02:44.928045 | orchestrator | Tuesday 31 March 2026 02:02:42 +0000 (0:00:00.237) 0:00:21.834 *********
2026-03-31 02:02:44.928056 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.928067 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.928077 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.928088 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.928099 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:44.928110 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:02:44.928120 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:02:44.928131 | orchestrator |
2026-03-31 02:02:44.928142 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-31 02:02:44.928152 | orchestrator | Tuesday 31 March 2026 02:02:43 +0000 (0:00:01.080) 0:00:22.915 *********
2026-03-31 02:02:44.928163 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.928177 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:02:44.928196 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.928224 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:02:44.928243 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.928259 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:02:44.928276 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:02:44.928294 | orchestrator |
2026-03-31 02:02:44.928311 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-31 02:02:44.928328 | orchestrator | Tuesday 31 March 2026 02:02:43 +0000 (0:00:00.569) 0:00:23.485 *********
2026-03-31 02:02:44.928343 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:02:44.928359 | orchestrator | ok: [testbed-manager]
2026-03-31 02:02:44.928389 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:02:44.928408 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:02:44.928440 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.444476 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:03:26.444612 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:03:26.444627 | orchestrator |
2026-03-31 02:03:26.444644 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-31 02:03:26.444661 | orchestrator | Tuesday 31 March 2026 02:02:44 +0000 (0:00:01.079) 0:00:24.564 *********
2026-03-31 02:03:26.444677 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.444694 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.444707 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.444722 | orchestrator | changed: [testbed-manager]
2026-03-31 02:03:26.444737 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:03:26.444753 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:03:26.444768 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:03:26.444784 | orchestrator |
2026-03-31 02:03:26.444800 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-31 02:03:26.444816 | orchestrator | Tuesday 31 March 2026 02:03:00 +0000 (0:00:15.706) 0:00:40.270 *********
2026-03-31 02:03:26.444832 | orchestrator | ok: [testbed-manager]
2026-03-31 02:03:26.444876 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.444886 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.444896 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.444906 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:03:26.444915 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:03:26.444925 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:03:26.444935 | orchestrator |
2026-03-31 02:03:26.444945 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-31 02:03:26.444955 | orchestrator | Tuesday 31 March 2026 02:03:00 +0000 (0:00:00.247) 0:00:40.518 *********
2026-03-31 02:03:26.444965 | orchestrator | ok: [testbed-manager]
2026-03-31 02:03:26.444975 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.445045 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.445061 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.445075 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:03:26.445086 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:03:26.445096 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:03:26.445110 | orchestrator |
2026-03-31 02:03:26.445124 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-31 02:03:26.445136 | orchestrator | Tuesday 31 March 2026 02:03:01 +0000 (0:00:00.243) 0:00:40.762 *********
2026-03-31 02:03:26.445148 | orchestrator | ok: [testbed-manager]
2026-03-31 02:03:26.445170 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.445185 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.445199 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.445213 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:03:26.445227 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:03:26.445241 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:03:26.445255 | orchestrator |
2026-03-31 02:03:26.445271 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-31 02:03:26.445286 | orchestrator | Tuesday 31 March 2026 02:03:01 +0000 (0:00:00.234) 0:00:40.996 *********
2026-03-31 02:03:26.445302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:03:26.445320 | orchestrator |
2026-03-31 02:03:26.445335 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-31 02:03:26.445350 | orchestrator | Tuesday 31 March 2026 02:03:01 +0000 (0:00:00.332) 0:00:41.328 *********
2026-03-31 02:03:26.445365 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.445374 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:03:26.445383 | orchestrator | ok: [testbed-manager]
2026-03-31 02:03:26.445391 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.445400 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.445408 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:03:26.445417 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:03:26.445425 | orchestrator |
2026-03-31 02:03:26.445434 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-31 02:03:26.445443 | orchestrator | Tuesday 31 March 2026 02:03:03 +0000 (0:00:01.585) 0:00:42.914 *********
2026-03-31 02:03:26.445452 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:03:26.445461 | orchestrator | changed: [testbed-manager]
2026-03-31 02:03:26.445469 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:03:26.445478 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:03:26.445486 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:03:26.445495 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:03:26.445503 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:03:26.445512 | orchestrator |
2026-03-31 02:03:26.445520 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-31 02:03:26.445529 | orchestrator | Tuesday 31 March 2026 02:03:04 +0000 (0:00:01.105) 0:00:44.019 *********
2026-03-31 02:03:26.445538 | orchestrator | ok: [testbed-manager]
2026-03-31 02:03:26.445546 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:03:26.445555 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:03:26.445575 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:03:26.445583 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:03:26.445592 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:03:26.445600 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:03:26.445609 | orchestrator |
2026-03-31 02:03:26.445618 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-31 02:03:26.445626 | orchestrator | Tuesday 31 March 2026 02:03:05 +0000 (0:00:00.823) 0:00:44.843 *********
2026-03-31 02:03:26.445636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:03:26.445646 | orchestrator |
2026-03-31 02:03:26.445669 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-31 02:03:26.445680 | orchestrator | Tuesday 31 March 2026 02:03:05 +0000 (0:00:00.343) 0:00:45.186 *********
2026-03-31 02:03:26.445688 | orchestrator | changed: [testbed-manager]
2026-03-31 02:03:26.445697 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:03:26.445706 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:03:26.445715 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:03:26.445724 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:03:26.445732 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:03:26.445741 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:03:26.445750 | orchestrator |
2026-03-31 02:03:26.445779 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-03-31 02:03:26.445789 | orchestrator | Tuesday 31 March 2026 02:03:06 +0000 (0:00:01.016) 0:00:46.202 ********* 2026-03-31 02:03:26.445798 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:03:26.445806 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:03:26.445815 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:03:26.445824 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:03:26.445832 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:03:26.445841 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:03:26.445849 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:03:26.445858 | orchestrator | 2026-03-31 02:03:26.445867 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-31 02:03:26.445876 | orchestrator | Tuesday 31 March 2026 02:03:06 +0000 (0:00:00.243) 0:00:46.446 ********* 2026-03-31 02:03:26.445885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:03:26.445894 | orchestrator | 2026-03-31 02:03:26.445902 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-31 02:03:26.445968 | orchestrator | Tuesday 31 March 2026 02:03:07 +0000 (0:00:00.326) 0:00:46.773 ********* 2026-03-31 02:03:26.445980 | orchestrator | ok: [testbed-manager] 2026-03-31 02:03:26.446100 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:03:26.446118 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:03:26.446132 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:03:26.446148 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:03:26.446163 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:03:26.446177 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:03:26.446190 | 
orchestrator | 2026-03-31 02:03:26.446199 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-31 02:03:26.446208 | orchestrator | Tuesday 31 March 2026 02:03:08 +0000 (0:00:01.582) 0:00:48.355 ********* 2026-03-31 02:03:26.446217 | orchestrator | changed: [testbed-manager] 2026-03-31 02:03:26.446226 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:03:26.446234 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:03:26.446243 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:03:26.446251 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:03:26.446260 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:03:26.446268 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:03:26.446288 | orchestrator | 2026-03-31 02:03:26.446297 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-31 02:03:26.446306 | orchestrator | Tuesday 31 March 2026 02:03:09 +0000 (0:00:01.162) 0:00:49.518 ********* 2026-03-31 02:03:26.446314 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:03:26.446323 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:03:26.446331 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:03:26.446340 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:03:26.446349 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:03:26.446359 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:03:26.446373 | orchestrator | changed: [testbed-manager] 2026-03-31 02:03:26.446387 | orchestrator | 2026-03-31 02:03:26.446401 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-31 02:03:26.446416 | orchestrator | Tuesday 31 March 2026 02:03:23 +0000 (0:00:13.516) 0:01:03.034 ********* 2026-03-31 02:03:26.446430 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:03:26.446444 | orchestrator | ok: [testbed-manager] 2026-03-31 02:03:26.446458 | orchestrator | ok: 
[testbed-node-1] 2026-03-31 02:03:26.446473 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:03:26.446488 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:03:26.446502 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:03:26.446516 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:03:26.446531 | orchestrator | 2026-03-31 02:03:26.446545 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-31 02:03:26.446558 | orchestrator | Tuesday 31 March 2026 02:03:24 +0000 (0:00:01.306) 0:01:04.340 ********* 2026-03-31 02:03:26.446567 | orchestrator | ok: [testbed-manager] 2026-03-31 02:03:26.446576 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:03:26.446584 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:03:26.446593 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:03:26.446601 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:03:26.446609 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:03:26.446618 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:03:26.446626 | orchestrator | 2026-03-31 02:03:26.446635 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-31 02:03:26.446643 | orchestrator | Tuesday 31 March 2026 02:03:25 +0000 (0:00:00.920) 0:01:05.261 ********* 2026-03-31 02:03:26.446652 | orchestrator | ok: [testbed-manager] 2026-03-31 02:03:26.446660 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:03:26.446669 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:03:26.446677 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:03:26.446685 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:03:26.446694 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:03:26.446702 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:03:26.446711 | orchestrator | 2026-03-31 02:03:26.446720 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-31 02:03:26.446729 | orchestrator | Tuesday 
31 March 2026 02:03:25 +0000 (0:00:00.237) 0:01:05.498 ********* 2026-03-31 02:03:26.446737 | orchestrator | ok: [testbed-manager] 2026-03-31 02:03:26.446746 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:03:26.446754 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:03:26.446763 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:03:26.446771 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:03:26.446780 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:03:26.446788 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:03:26.446797 | orchestrator | 2026-03-31 02:03:26.446867 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-31 02:03:26.446877 | orchestrator | Tuesday 31 March 2026 02:03:26 +0000 (0:00:00.262) 0:01:05.761 ********* 2026-03-31 02:03:26.446887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:03:26.446897 | orchestrator | 2026-03-31 02:03:26.446917 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-31 02:05:48.925238 | orchestrator | Tuesday 31 March 2026 02:03:26 +0000 (0:00:00.323) 0:01:06.085 ********* 2026-03-31 02:05:48.925344 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925355 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925363 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925370 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925377 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925385 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925391 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925398 | orchestrator | 2026-03-31 02:05:48.925406 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-31 02:05:48.925413 | orchestrator | Tuesday 31 March 2026 02:03:28 +0000 (0:00:01.717) 0:01:07.802 ********* 2026-03-31 02:05:48.925420 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:05:48.925428 | orchestrator | changed: [testbed-manager] 2026-03-31 02:05:48.925435 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:05:48.925441 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:05:48.925448 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:05:48.925455 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:05:48.925462 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:05:48.925468 | orchestrator | 2026-03-31 02:05:48.925475 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-31 02:05:48.925482 | orchestrator | Tuesday 31 March 2026 02:03:28 +0000 (0:00:00.605) 0:01:08.408 ********* 2026-03-31 02:05:48.925489 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925496 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925503 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925509 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925516 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925522 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925529 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925536 | orchestrator | 2026-03-31 02:05:48.925543 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-31 02:05:48.925550 | orchestrator | Tuesday 31 March 2026 02:03:28 +0000 (0:00:00.242) 0:01:08.650 ********* 2026-03-31 02:05:48.925556 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925563 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925570 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925576 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925582 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925588 | 
orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925594 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925600 | orchestrator | 2026-03-31 02:05:48.925606 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-31 02:05:48.925613 | orchestrator | Tuesday 31 March 2026 02:03:30 +0000 (0:00:01.250) 0:01:09.900 ********* 2026-03-31 02:05:48.925619 | orchestrator | changed: [testbed-manager] 2026-03-31 02:05:48.925625 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:05:48.925632 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:05:48.925638 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:05:48.925644 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:05:48.925650 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:05:48.925656 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:05:48.925662 | orchestrator | 2026-03-31 02:05:48.925672 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-31 02:05:48.925679 | orchestrator | Tuesday 31 March 2026 02:03:31 +0000 (0:00:01.676) 0:01:11.576 ********* 2026-03-31 02:05:48.925686 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925692 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925698 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925704 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925710 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925716 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925722 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925728 | orchestrator | 2026-03-31 02:05:48.925734 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-31 02:05:48.925761 | orchestrator | Tuesday 31 March 2026 02:03:34 +0000 (0:00:02.548) 0:01:14.125 ********* 2026-03-31 02:05:48.925768 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925773 
| orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925779 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925785 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925791 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925798 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925806 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925813 | orchestrator | 2026-03-31 02:05:48.925819 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-31 02:05:48.925825 | orchestrator | Tuesday 31 March 2026 02:04:13 +0000 (0:00:38.876) 0:01:53.002 ********* 2026-03-31 02:05:48.925832 | orchestrator | changed: [testbed-manager] 2026-03-31 02:05:48.925838 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:05:48.925845 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:05:48.925851 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:05:48.925857 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:05:48.925863 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:05:48.925869 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:05:48.925876 | orchestrator | 2026-03-31 02:05:48.925882 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-31 02:05:48.925888 | orchestrator | Tuesday 31 March 2026 02:05:32 +0000 (0:01:18.753) 0:03:11.755 ********* 2026-03-31 02:05:48.925894 | orchestrator | ok: [testbed-manager] 2026-03-31 02:05:48.925901 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925908 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925915 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925923 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925930 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925937 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.925944 | orchestrator | 2026-03-31 02:05:48.925950 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-31 02:05:48.925957 | orchestrator | Tuesday 31 March 2026 02:05:33 +0000 (0:00:01.808) 0:03:13.563 ********* 2026-03-31 02:05:48.925964 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:05:48.925970 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:05:48.925976 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:05:48.925982 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:05:48.925989 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:05:48.925995 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:05:48.926001 | orchestrator | changed: [testbed-manager] 2026-03-31 02:05:48.926007 | orchestrator | 2026-03-31 02:05:48.926061 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-31 02:05:48.926069 | orchestrator | Tuesday 31 March 2026 02:05:47 +0000 (0:00:13.630) 0:03:27.194 ********* 2026-03-31 02:05:48.926122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-31 02:05:48.926146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-31 02:05:48.926162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-31 02:05:48.926169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-31 02:05:48.926176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-31 02:05:48.926183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-31 02:05:48.926190 | orchestrator | 2026-03-31 02:05:48.926197 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-31 02:05:48.926203 | orchestrator | Tuesday 31 March 2026 02:05:48 +0000 (0:00:00.519) 0:03:27.713 ********* 2026-03-31 02:05:48.926209 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-31 02:05:48.926215 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-31 02:05:48.926221 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:05:48.926227 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:05:48.926234 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-31 02:05:48.926240 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-31 02:05:48.926247 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:05:48.926253 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:05:48.926259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 02:05:48.926265 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 02:05:48.926271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 02:05:48.926277 | orchestrator | 2026-03-31 02:05:48.926283 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-31 02:05:48.926289 | orchestrator | Tuesday 31 March 2026 02:05:48 +0000 (0:00:00.768) 0:03:28.481 ********* 2026-03-31 02:05:48.926299 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-31 02:05:48.926307 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-31 02:05:48.926312 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-31 02:05:48.926318 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-31 02:05:48.926324 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-31 02:05:48.926335 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-31 02:05:56.643406 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-31 02:05:56.643542 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-31 02:05:56.643607 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-31 02:05:56.643634 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-31 02:05:56.643653 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-31 02:05:56.643671 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-31 02:05:56.643689 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-31 02:05:56.643705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-31 02:05:56.643721 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:05:56.643740 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-31 02:05:56.643758 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-31 02:05:56.643776 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-31 02:05:56.643793 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-31 02:05:56.643812 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-31 02:05:56.643831 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-31 02:05:56.643849 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-31 02:05:56.643869 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-31 02:05:56.643881 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-31 02:05:56.643892 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-31 02:05:56.643903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-31 02:05:56.643914 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-31 02:05:56.643925 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-31 02:05:56.643943 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-31 02:05:56.643969 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-31 02:05:56.643989 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-31 02:05:56.644007 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:05:56.644025 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:05:56.644042 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-31 02:05:56.644060 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-31 02:05:56.644078 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-31 02:05:56.644126 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-31 02:05:56.644146 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-31 02:05:56.644158 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-31 02:05:56.644168 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-31 02:05:56.644179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-31 02:05:56.644191 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-31 02:05:56.644216 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-31 02:05:56.644228 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:05:56.644254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-31 02:05:56.644266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-31 02:05:56.644276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-31 02:05:56.644287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-31 02:05:56.644298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-31 02:05:56.644331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-31 02:05:56.644343 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-31 02:05:56.644354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-31 
02:05:56.644365 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-31 02:05:56.644376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-31 02:05:56.644387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-31 02:05:56.644398 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-31 02:05:56.644409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-31 02:05:56.644419 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-31 02:05:56.644431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-31 02:05:56.644441 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-31 02:05:56.644452 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-31 02:05:56.644463 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-31 02:05:56.644474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-31 02:05:56.644484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-31 02:05:56.644495 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-31 02:05:56.644506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-31 02:05:56.644516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-31 02:05:56.644527 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-31 02:05:56.644546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-31 02:05:56.644573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-31 02:05:56.644593 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-31 02:05:56.644611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-31 02:05:56.644628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-31 02:05:56.644646 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-31 02:05:56.644678 | orchestrator | 2026-03-31 02:05:56.644699 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-31 02:05:56.644718 | orchestrator | Tuesday 31 March 2026 02:05:53 +0000 (0:00:04.712) 0:03:33.193 ********* 2026-03-31 02:05:56.644737 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644752 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644785 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644806 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-31 02:05:56.644817 | orchestrator | 
2026-03-31 02:05:56.644828 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-31 02:05:56.644839 | orchestrator | Tuesday 31 March 2026 02:05:55 +0000 (0:00:01.571) 0:03:34.765 ********* 2026-03-31 02:05:56.644849 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:05:56.644860 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:05:56.644871 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:05:56.644882 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:05:56.644900 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:05:56.644911 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:05:56.644922 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:05:56.644939 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:05:56.644966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:05:56.644988 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:05:56.645017 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:06:10.596268 | orchestrator | 2026-03-31 02:06:10.596380 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-31 02:06:10.596395 | orchestrator | Tuesday 31 March 2026 02:05:56 +0000 (0:00:01.514) 0:03:36.279 ********* 2026-03-31 02:06:10.596406 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:06:10.596417 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:06:10.596427 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:06:10.596439 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:06:10.596448 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:06:10.596458 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-31 02:06:10.596468 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:06:10.596478 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:06:10.596487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:06:10.596497 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:06:10.596507 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-31 02:06:10.596517 | orchestrator | 2026-03-31 02:06:10.596526 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-31 02:06:10.596555 | orchestrator | Tuesday 31 March 2026 02:05:57 +0000 (0:00:00.675) 0:03:36.955 ********* 2026-03-31 02:06:10.596565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-31 02:06:10.596575 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:06:10.596585 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-31 02:06:10.596594 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:06:10.596604 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-31 02:06:10.596614 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 02:06:10.596624 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-31 02:06:10.596633 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:06:10.596643 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-31 02:06:10.596653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-31 02:06:10.596663 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-31 02:06:10.596673 | orchestrator | 2026-03-31 02:06:10.596683 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-31 02:06:10.596695 | orchestrator | Tuesday 31 March 2026 02:05:57 +0000 (0:00:00.593) 0:03:37.548 ********* 2026-03-31 02:06:10.596706 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:06:10.596718 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:06:10.596730 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:06:10.596741 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:06:10.596753 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:06:10.596763 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:06:10.596772 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:06:10.596782 | orchestrator | 2026-03-31 02:06:10.596791 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-31 02:06:10.596801 | orchestrator | Tuesday 31 March 2026 02:05:58 +0000 (0:00:00.329) 0:03:37.877 ********* 2026-03-31 02:06:10.596811 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:06:10.596821 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:06:10.596830 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:06:10.596839 | orchestrator | ok: [testbed-manager] 2026-03-31 02:06:10.596849 | 
orchestrator | ok: [testbed-node-1] 2026-03-31 02:06:10.596858 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:06:10.596868 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:06:10.596877 | orchestrator | 2026-03-31 02:06:10.596887 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-31 02:06:10.596896 | orchestrator | Tuesday 31 March 2026 02:06:04 +0000 (0:00:06.000) 0:03:43.878 ********* 2026-03-31 02:06:10.596906 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-31 02:06:10.596916 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-31 02:06:10.596926 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:06:10.596935 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-31 02:06:10.596945 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:06:10.596955 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-31 02:06:10.596964 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:06:10.596974 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-31 02:06:10.596985 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:06:10.596994 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-31 02:06:10.597016 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:06:10.597026 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:06:10.597036 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-31 02:06:10.597045 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:06:10.597055 | orchestrator | 2026-03-31 02:06:10.597071 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-31 02:06:10.597081 | orchestrator | Tuesday 31 March 2026 02:06:04 +0000 (0:00:00.340) 0:03:44.218 ********* 2026-03-31 02:06:10.597090 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-31 02:06:10.597121 | orchestrator | ok: [testbed-node-4] => 
(item=cron) 2026-03-31 02:06:10.597131 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-31 02:06:10.597157 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-31 02:06:10.597167 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-31 02:06:10.597177 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-31 02:06:10.597186 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-31 02:06:10.597196 | orchestrator | 2026-03-31 02:06:10.597205 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-31 02:06:10.597215 | orchestrator | Tuesday 31 March 2026 02:06:05 +0000 (0:00:01.123) 0:03:45.341 ********* 2026-03-31 02:06:10.597226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:06:10.597238 | orchestrator | 2026-03-31 02:06:10.597248 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-31 02:06:10.597258 | orchestrator | Tuesday 31 March 2026 02:06:06 +0000 (0:00:00.548) 0:03:45.889 ********* 2026-03-31 02:06:10.597267 | orchestrator | ok: [testbed-manager] 2026-03-31 02:06:10.597277 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:06:10.597286 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:06:10.597296 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:06:10.597305 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:06:10.597315 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:06:10.597324 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:06:10.597333 | orchestrator | 2026-03-31 02:06:10.597343 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-31 02:06:10.597353 | orchestrator | Tuesday 31 March 2026 02:06:07 +0000 (0:00:01.408) 0:03:47.298 
********* 2026-03-31 02:06:10.597362 | orchestrator | ok: [testbed-manager] 2026-03-31 02:06:10.597372 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:06:10.597381 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:06:10.597391 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:06:10.597400 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:06:10.597410 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:06:10.597419 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:06:10.597429 | orchestrator | 2026-03-31 02:06:10.597439 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-31 02:06:10.597448 | orchestrator | Tuesday 31 March 2026 02:06:08 +0000 (0:00:00.629) 0:03:47.928 ********* 2026-03-31 02:06:10.597458 | orchestrator | changed: [testbed-manager] 2026-03-31 02:06:10.597468 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:06:10.597477 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:06:10.597487 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:06:10.597496 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:06:10.597506 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:06:10.597515 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:06:10.597525 | orchestrator | 2026-03-31 02:06:10.597535 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-31 02:06:10.597544 | orchestrator | Tuesday 31 March 2026 02:06:08 +0000 (0:00:00.612) 0:03:48.541 ********* 2026-03-31 02:06:10.597554 | orchestrator | ok: [testbed-manager] 2026-03-31 02:06:10.597564 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:06:10.597573 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:06:10.597583 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:06:10.597592 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:06:10.597602 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:06:10.597611 | orchestrator | ok: [testbed-node-2] 2026-03-31 
02:06:10.597621 | orchestrator | 2026-03-31 02:06:10.597630 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-31 02:06:10.597646 | orchestrator | Tuesday 31 March 2026 02:06:09 +0000 (0:00:00.693) 0:03:49.235 ********* 2026-03-31 02:06:10.597659 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921208.7546363, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:10.597672 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921268.97967, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:10.597687 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921254.3093276, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:10.597717 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921268.5855184, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.579980 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921265.1917214, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580079 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921255.770748, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580094 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774921259.735996, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580158 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580170 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580194 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580204 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580231 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580242 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 
02:06:15.580252 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 02:06:15.580270 | orchestrator | 2026-03-31 02:06:15.580282 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-31 02:06:15.580294 | orchestrator | Tuesday 31 March 2026 02:06:10 +0000 (0:00:00.998) 0:03:50.233 ********* 2026-03-31 02:06:15.580305 | orchestrator | changed: [testbed-manager] 2026-03-31 02:06:15.580316 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:06:15.580325 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:06:15.580335 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:06:15.580345 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:06:15.580355 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:06:15.580364 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:06:15.580374 | orchestrator | 2026-03-31 02:06:15.580384 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-31 02:06:15.580393 | orchestrator | Tuesday 31 March 2026 02:06:11 +0000 (0:00:01.149) 0:03:51.383 ********* 2026-03-31 02:06:15.580403 | orchestrator | changed: [testbed-manager] 2026-03-31 02:06:15.580412 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:06:15.580422 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:06:15.580432 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:06:15.580441 | orchestrator | changed: [testbed-node-0] 
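The "Remove pam_motd.so rule" task above iterates over the files found in /etc/pam.d and reports `changed` where a rule was dropped. A minimal sketch of that filtering step, assuming a hypothetical helper name (the role itself edits the files in place via Ansible modules):

```python
def strip_pam_motd(text):
    """Drop any pam_motd.so rule lines from a pam.d file's content.

    Returns the filtered content; comparing it with the input tells
    you whether the task would report 'changed' for that file.
    """
    kept = [line for line in text.splitlines() if "pam_motd.so" not in line]
    return "\n".join(kept) + "\n"
```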
2026-03-31 02:06:15.580451 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:06:15.580460 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:06:15.580470 | orchestrator | 2026-03-31 02:06:15.580479 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-31 02:06:15.580489 | orchestrator | Tuesday 31 March 2026 02:06:12 +0000 (0:00:01.212) 0:03:52.596 ********* 2026-03-31 02:06:15.580499 | orchestrator | changed: [testbed-manager] 2026-03-31 02:06:15.580508 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:06:15.580520 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:06:15.580532 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:06:15.580543 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:06:15.580554 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:06:15.580565 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:06:15.580576 | orchestrator | 2026-03-31 02:06:15.580586 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-31 02:06:15.580598 | orchestrator | Tuesday 31 March 2026 02:06:14 +0000 (0:00:01.158) 0:03:53.754 ********* 2026-03-31 02:06:15.580609 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:06:15.580621 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:06:15.580637 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:06:15.580649 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:06:15.580660 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:06:15.580670 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:06:15.580681 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:06:15.580692 | orchestrator | 2026-03-31 02:06:15.580703 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-31 02:06:15.580715 | orchestrator | Tuesday 31 March 2026 02:06:14 +0000 (0:00:00.287) 0:03:54.041 ********* 2026-03-31 
02:06:15.580726 | orchestrator | ok: [testbed-manager] 2026-03-31 02:06:15.580738 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:06:15.580749 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:06:15.580760 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:06:15.580771 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:06:15.580782 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:06:15.580793 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:06:15.580804 | orchestrator | 2026-03-31 02:06:15.580815 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-31 02:06:15.580826 | orchestrator | Tuesday 31 March 2026 02:06:15 +0000 (0:00:00.722) 0:03:54.763 ********* 2026-03-31 02:06:15.580839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:06:15.580858 | orchestrator | 2026-03-31 02:06:15.580870 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-31 02:06:15.580886 | orchestrator | Tuesday 31 March 2026 02:06:15 +0000 (0:00:00.460) 0:03:55.224 ********* 2026-03-31 02:07:35.066456 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.066565 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:07:35.066580 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:07:35.066590 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:07:35.066599 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:07:35.066607 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:07:35.066616 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:07:35.066625 | orchestrator | 2026-03-31 02:07:35.066635 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-31 02:07:35.066646 | orchestrator | 
Tuesday 31 March 2026 02:06:24 +0000 (0:00:08.868) 0:04:04.092 ********* 2026-03-31 02:07:35.066655 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.066664 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:07:35.066672 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:07:35.066681 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.066690 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:07:35.066698 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:07:35.066707 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:07:35.066715 | orchestrator | 2026-03-31 02:07:35.066724 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-31 02:07:35.066733 | orchestrator | Tuesday 31 March 2026 02:06:25 +0000 (0:00:01.284) 0:04:05.377 ********* 2026-03-31 02:07:35.066742 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.066750 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:07:35.066759 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:07:35.066768 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:07:35.066776 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.066785 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:07:35.066793 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:07:35.066802 | orchestrator | 2026-03-31 02:07:35.066811 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-31 02:07:35.066819 | orchestrator | Tuesday 31 March 2026 02:06:26 +0000 (0:00:01.200) 0:04:06.578 ********* 2026-03-31 02:07:35.066828 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.066837 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:07:35.066845 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:07:35.066854 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.066863 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:07:35.066871 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:07:35.066880 | orchestrator | ok: 
[testbed-node-2] 2026-03-31 02:07:35.066889 | orchestrator | 2026-03-31 02:07:35.066898 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-31 02:07:35.066907 | orchestrator | Tuesday 31 March 2026 02:06:27 +0000 (0:00:00.337) 0:04:06.916 ********* 2026-03-31 02:07:35.066916 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.066924 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:07:35.066933 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:07:35.066942 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.066950 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:07:35.066959 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:07:35.066967 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:07:35.066978 | orchestrator | 2026-03-31 02:07:35.066989 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-31 02:07:35.066999 | orchestrator | Tuesday 31 March 2026 02:06:27 +0000 (0:00:00.314) 0:04:07.230 ********* 2026-03-31 02:07:35.067009 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.067019 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:07:35.067030 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:07:35.067064 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.067074 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:07:35.067084 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:07:35.067094 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:07:35.067104 | orchestrator | 2026-03-31 02:07:35.067114 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-31 02:07:35.067124 | orchestrator | Tuesday 31 March 2026 02:06:27 +0000 (0:00:00.295) 0:04:07.525 ********* 2026-03-31 02:07:35.067135 | orchestrator | ok: [testbed-manager] 2026-03-31 02:07:35.067145 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:07:35.067186 | orchestrator | ok: 
[testbed-node-0]
2026-03-31 02:07:35.067197 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:35.067207 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:35.067217 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:35.067227 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:35.067237 | orchestrator |
2026-03-31 02:07:35.067247 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-31 02:07:35.067258 | orchestrator | Tuesday 31 March 2026 02:06:33 +0000 (0:00:05.879) 0:04:13.405 *********
2026-03-31 02:07:35.067270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:07:35.067283 | orchestrator |
2026-03-31 02:07:35.067294 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-31 02:07:35.067304 | orchestrator | Tuesday 31 March 2026 02:06:34 +0000 (0:00:00.455) 0:04:13.861 *********
2026-03-31 02:07:35.067315 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067325 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-31 02:07:35.067336 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067344 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:35.067353 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-31 02:07:35.067380 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067394 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-31 02:07:35.067409 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:35.067423 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:35.067437 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067451 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-31 02:07:35.067465 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067478 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-31 02:07:35.067492 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:35.067504 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067517 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-31 02:07:35.067551 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:35.067567 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:35.067582 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-31 02:07:35.067597 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-31 02:07:35.067611 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:35.067626 | orchestrator |
2026-03-31 02:07:35.067635 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-31 02:07:35.067644 | orchestrator | Tuesday 31 March 2026 02:06:34 +0000 (0:00:00.369) 0:04:14.230 *********
2026-03-31 02:07:35.067653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:07:35.067662 | orchestrator |
2026-03-31 02:07:35.067670 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-31 02:07:35.067689 | orchestrator | Tuesday 31 March 2026 02:06:35 +0000 (0:00:00.425) 0:04:14.655 *********
2026-03-31 02:07:35.067698 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-31 02:07:35.067706 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-31 02:07:35.067715 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:35.067724 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:35.067732 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-31 02:07:35.067741 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:35.067749 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-31 02:07:35.067758 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-31 02:07:35.067766 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:35.067775 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-31 02:07:35.067783 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:35.067792 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:35.067800 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-31 02:07:35.067809 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:35.067817 | orchestrator |
2026-03-31 02:07:35.067826 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-31 02:07:35.067835 | orchestrator | Tuesday 31 March 2026 02:06:35 +0000 (0:00:00.333) 0:04:14.989 *********
2026-03-31 02:07:35.067844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:07:35.067853 | orchestrator |
2026-03-31 02:07:35.067861 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-31 02:07:35.067870 | orchestrator | Tuesday 31 March 2026 02:06:35 +0000 (0:00:00.609) 0:04:15.598 *********
2026-03-31 02:07:35.067878 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:35.067887 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:35.067895 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:35.067904 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:35.067912 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:35.067921 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:35.067929 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:35.067938 | orchestrator |
2026-03-31 02:07:35.067946 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-31 02:07:35.067955 | orchestrator | Tuesday 31 March 2026 02:07:11 +0000 (0:00:35.507) 0:04:51.106 *********
2026-03-31 02:07:35.067964 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:35.067972 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:35.067981 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:35.067989 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:35.067997 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:35.068006 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:35.068014 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:35.068023 | orchestrator |
2026-03-31 02:07:35.068032 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-31 02:07:35.068046 | orchestrator | Tuesday 31 March 2026 02:07:19 +0000 (0:00:07.947) 0:04:59.053 *********
2026-03-31 02:07:35.068055 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:35.068064 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:35.068072 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:35.068081 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:35.068089 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:35.068098 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:35.068106 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:35.068115 | orchestrator |
2026-03-31 02:07:35.068123 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-31 02:07:35.068138 | orchestrator | Tuesday 31 March 2026 02:07:27 +0000 (0:00:08.002) 0:05:07.056 *********
2026-03-31 02:07:35.068171 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:35.068182 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:35.068190 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:35.068199 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:35.068217 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:35.068226 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:35.068288 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:35.068297 | orchestrator |
2026-03-31 02:07:35.068307 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-31 02:07:35.068316 | orchestrator | Tuesday 31 March 2026 02:07:29 +0000 (0:00:01.785) 0:05:08.841 *********
2026-03-31 02:07:35.068325 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:35.068333 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:35.068342 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:35.068350 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:35.068359 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:35.068368 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:35.068376 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:35.068385 | orchestrator |
2026-03-31 02:07:35.068402 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-31 02:07:46.761899 | orchestrator | Tuesday 31 March 2026 02:07:35 +0000 (0:00:05.857) 0:05:14.699 *********
2026-03-31 02:07:46.762213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:07:46.762244 | orchestrator |
2026-03-31 02:07:46.762267 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-31 02:07:46.762286 | orchestrator | Tuesday 31 March 2026 02:07:35 +0000 (0:00:00.565) 0:05:15.264 *********
2026-03-31 02:07:46.762305 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:46.762324 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:46.762342 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:46.762360 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:46.762378 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:46.762395 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:46.762414 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:46.762433 | orchestrator |
2026-03-31 02:07:46.762451 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-31 02:07:46.762472 | orchestrator | Tuesday 31 March 2026 02:07:36 +0000 (0:00:00.799) 0:05:16.064 *********
2026-03-31 02:07:46.762491 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:46.762512 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:46.762532 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:46.762550 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:46.762571 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:46.762590 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:46.762610 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:46.762629 | orchestrator |
2026-03-31 02:07:46.762648 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-31 02:07:46.762667 | orchestrator | Tuesday 31 March 2026 02:07:38 +0000 (0:00:01.712) 0:05:17.776 *********
2026-03-31 02:07:46.762685 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:07:46.762704 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:07:46.762722 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:07:46.762742 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:07:46.762762 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:07:46.762782 | orchestrator | changed: [testbed-manager]
2026-03-31 02:07:46.762802 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:07:46.762820 | orchestrator |
2026-03-31 02:07:46.762840 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-31 02:07:46.762860 | orchestrator | Tuesday 31 March 2026 02:07:38 +0000 (0:00:00.806) 0:05:18.582 *********
2026-03-31 02:07:46.762912 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.762933 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.762953 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.762971 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:46.762988 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:46.763007 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:46.763027 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:46.763046 | orchestrator |
2026-03-31 02:07:46.763064 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-31 02:07:46.763082 | orchestrator | Tuesday 31 March 2026 02:07:39 +0000 (0:00:00.321) 0:05:18.904 *********
2026-03-31 02:07:46.763101 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.763120 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.763139 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.763183 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:46.763202 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:46.763221 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:46.763240 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:46.763260 | orchestrator |
2026-03-31 02:07:46.763278 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-31 02:07:46.763297 | orchestrator | Tuesday 31 March 2026 02:07:39 +0000 (0:00:00.400) 0:05:19.304 *********
2026-03-31 02:07:46.763315 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:46.763333 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:46.763354 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:46.763372 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:46.763390 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:46.763407 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:46.763426 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:46.763444 | orchestrator |
2026-03-31 02:07:46.763463 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-31 02:07:46.763502 | orchestrator | Tuesday 31 March 2026 02:07:39 +0000 (0:00:00.298) 0:05:19.602 *********
2026-03-31 02:07:46.763521 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.763540 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.763558 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.763576 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:46.763596 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:46.763615 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:46.763634 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:46.763653 | orchestrator |
2026-03-31 02:07:46.763673 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-31 02:07:46.763692 | orchestrator | Tuesday 31 March 2026 02:07:40 +0000 (0:00:00.327) 0:05:19.930 *********
2026-03-31 02:07:46.763711 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:46.763730 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:46.763749 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:46.763769 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:46.763788 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:46.763806 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:46.763824 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:46.763844 | orchestrator |
2026-03-31 02:07:46.763864 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-31 02:07:46.763883 | orchestrator | Tuesday 31 March 2026 02:07:40 +0000 (0:00:00.335) 0:05:20.266 *********
2026-03-31 02:07:46.763902 | orchestrator | ok: [testbed-manager] =>
2026-03-31 02:07:46.763920 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.763941 | orchestrator | ok: [testbed-node-3] =>
2026-03-31 02:07:46.763960 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.763979 | orchestrator | ok: [testbed-node-4] =>
2026-03-31 02:07:46.763997 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.764016 | orchestrator | ok: [testbed-node-5] =>
2026-03-31 02:07:46.764034 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.764081 | orchestrator | ok: [testbed-node-0] =>
2026-03-31 02:07:46.764116 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.764136 | orchestrator | ok: [testbed-node-1] =>
2026-03-31 02:07:46.764205 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.764226 | orchestrator | ok: [testbed-node-2] =>
2026-03-31 02:07:46.764245 | orchestrator |  docker_version: 5:27.5.1
2026-03-31 02:07:46.764265 | orchestrator |
2026-03-31 02:07:46.764284 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-31 02:07:46.764302 | orchestrator | Tuesday 31 March 2026 02:07:40 +0000 (0:00:00.281) 0:05:20.548 *********
2026-03-31 02:07:46.764320 | orchestrator | ok: [testbed-manager] =>
2026-03-31 02:07:46.764339 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764357 | orchestrator | ok: [testbed-node-3] =>
2026-03-31 02:07:46.764372 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764383 | orchestrator | ok: [testbed-node-4] =>
2026-03-31 02:07:46.764393 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764404 | orchestrator | ok: [testbed-node-5] =>
2026-03-31 02:07:46.764415 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764425 | orchestrator | ok: [testbed-node-0] =>
2026-03-31 02:07:46.764436 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764447 | orchestrator | ok: [testbed-node-1] =>
2026-03-31 02:07:46.764457 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764468 | orchestrator | ok: [testbed-node-2] =>
2026-03-31 02:07:46.764479 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-31 02:07:46.764490 | orchestrator |
2026-03-31 02:07:46.764501 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-31 02:07:46.764512 | orchestrator | Tuesday 31 March 2026 02:07:41 +0000 (0:00:00.344) 0:05:20.892 *********
2026-03-31 02:07:46.764523 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.764533 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.764544 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.764555 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:46.764565 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:46.764576 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:46.764587 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:46.764597 | orchestrator |
2026-03-31 02:07:46.764608 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-31 02:07:46.764620 | orchestrator | Tuesday 31 March 2026 02:07:41 +0000 (0:00:00.326) 0:05:21.219 *********
2026-03-31 02:07:46.764630 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.764641 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.764651 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.764661 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:07:46.764670 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:07:46.764679 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:07:46.764689 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:07:46.764698 | orchestrator |
2026-03-31 02:07:46.764708 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-31 02:07:46.764718 | orchestrator | Tuesday 31 March 2026 02:07:41 +0000 (0:00:00.301) 0:05:21.520 *********
2026-03-31 02:07:46.764730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:07:46.764742 | orchestrator |
2026-03-31 02:07:46.764752 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-31 02:07:46.764762 | orchestrator | Tuesday 31 March 2026 02:07:42 +0000 (0:00:00.447) 0:05:21.968 *********
2026-03-31 02:07:46.764771 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:46.764781 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:46.764790 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:46.764800 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:46.764809 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:46.764828 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:46.764837 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:46.764847 | orchestrator |
2026-03-31 02:07:46.764857 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-31 02:07:46.764867 | orchestrator | Tuesday 31 March 2026 02:07:43 +0000 (0:00:00.988) 0:05:22.956 *********
2026-03-31 02:07:46.764876 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:07:46.764885 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:07:46.764895 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:07:46.764904 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:07:46.764914 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:07:46.764931 | orchestrator | ok: [testbed-manager]
2026-03-31 02:07:46.764941 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:07:46.764950 | orchestrator |
2026-03-31 02:07:46.764960 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-31 02:07:46.764971 | orchestrator | Tuesday 31 March 2026 02:07:46 +0000 (0:00:02.974) 0:05:25.931 *********
2026-03-31 02:07:46.764980 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-31 02:07:46.764990 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-31 02:07:46.765000 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-31 02:07:46.765009 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-31 02:07:46.765019 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-31 02:07:46.765029 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-31 02:07:46.765038 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:07:46.765048 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-31 02:07:46.765057 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-31 02:07:46.765067 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-31 02:07:46.765076 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:07:46.765086 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-31 02:07:46.765095 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-31 02:07:46.765105 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-31 02:07:46.765114 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:07:46.765124 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-31 02:07:46.765143 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-31 02:08:47.325294 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-31 02:08:47.325383 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:08:47.325392 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-31 02:08:47.325398 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-31 02:08:47.325404 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-31 02:08:47.325408 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:08:47.325413 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:08:47.325418 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-31 02:08:47.325423 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-31 02:08:47.325428 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-31 02:08:47.325432 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:08:47.325437 | orchestrator |
2026-03-31 02:08:47.325443 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-31 02:08:47.325449 | orchestrator | Tuesday 31 March 2026 02:07:46 +0000 (0:00:00.691) 0:05:26.622 *********
2026-03-31 02:08:47.325454 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325459 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325464 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325468 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325473 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325478 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325501 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325506 | orchestrator |
2026-03-31 02:08:47.325510 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-31 02:08:47.325515 | orchestrator | Tuesday 31 March 2026 02:07:53 +0000 (0:00:06.580) 0:05:33.203 *********
2026-03-31 02:08:47.325519 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325524 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325529 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325533 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325538 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325542 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325547 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325551 | orchestrator |
2026-03-31 02:08:47.325556 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-31 02:08:47.325560 | orchestrator | Tuesday 31 March 2026 02:07:54 +0000 (0:00:01.038) 0:05:34.241 *********
2026-03-31 02:08:47.325565 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325570 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325574 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325579 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325583 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325588 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325592 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325597 | orchestrator |
2026-03-31 02:08:47.325601 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-31 02:08:47.325606 | orchestrator | Tuesday 31 March 2026 02:08:02 +0000 (0:00:08.269) 0:05:42.511 *********
2026-03-31 02:08:47.325611 | orchestrator | changed: [testbed-manager]
2026-03-31 02:08:47.325615 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325620 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325624 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325629 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325633 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325638 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325643 | orchestrator |
2026-03-31 02:08:47.325647 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-31 02:08:47.325652 | orchestrator | Tuesday 31 March 2026 02:08:06 +0000 (0:00:04.067) 0:05:46.578 *********
2026-03-31 02:08:47.325657 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325661 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325666 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325670 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325675 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325679 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325684 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325688 | orchestrator |
2026-03-31 02:08:47.325693 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-31 02:08:47.325698 | orchestrator | Tuesday 31 March 2026 02:08:08 +0000 (0:00:01.344) 0:05:47.922 *********
2026-03-31 02:08:47.325702 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325707 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325712 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325716 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325721 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325725 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325730 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325735 | orchestrator |
2026-03-31 02:08:47.325740 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-31 02:08:47.325744 | orchestrator | Tuesday 31 March 2026 02:08:09 +0000 (0:00:01.572) 0:05:49.495 *********
2026-03-31 02:08:47.325749 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:08:47.325753 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:08:47.325758 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:08:47.325762 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:08:47.325772 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:08:47.325777 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:08:47.325781 | orchestrator | changed: [testbed-manager]
2026-03-31 02:08:47.325786 | orchestrator |
2026-03-31 02:08:47.325790 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-31 02:08:47.325795 | orchestrator | Tuesday 31 March 2026 02:08:10 +0000 (0:00:00.665) 0:05:50.161 *********
2026-03-31 02:08:47.325799 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325804 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325808 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325813 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325817 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325822 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325826 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325831 | orchestrator |
2026-03-31 02:08:47.325835 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-31 02:08:47.325852 | orchestrator | Tuesday 31 March 2026 02:08:20 +0000 (0:00:09.553) 0:05:59.715 *********
2026-03-31 02:08:47.325858 | orchestrator | changed: [testbed-manager]
2026-03-31 02:08:47.325864 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325869 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325874 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325879 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325884 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325889 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325895 | orchestrator |
2026-03-31 02:08:47.325900 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-31 02:08:47.325906 | orchestrator | Tuesday 31 March 2026 02:08:20 +0000 (0:00:00.931) 0:06:00.647 *********
2026-03-31 02:08:47.325911 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325917 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325922 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325927 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325931 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325936 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325940 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325945 | orchestrator |
2026-03-31 02:08:47.325950 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-31 02:08:47.325954 | orchestrator | Tuesday 31 March 2026 02:08:30 +0000 (0:00:09.333) 0:06:09.980 *********
2026-03-31 02:08:47.325959 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.325963 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.325968 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.325972 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.325977 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.325981 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.325986 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.325991 | orchestrator |
2026-03-31 02:08:47.325995 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-31 02:08:47.326000 | orchestrator | Tuesday 31 March 2026 02:08:40 +0000 (0:00:10.437) 0:06:20.418 *********
2026-03-31 02:08:47.326005 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-31 02:08:47.326009 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-31 02:08:47.326056 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-31 02:08:47.326065 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-31 02:08:47.326072 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-31 02:08:47.326079 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-31 02:08:47.326086 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-31 02:08:47.326094 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-31 02:08:47.326101 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-31 02:08:47.326118 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-31 02:08:47.326126 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-31 02:08:47.326225 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-31 02:08:47.326236 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-31 02:08:47.326243 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-31 02:08:47.326251 | orchestrator |
2026-03-31 02:08:47.326258 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-31 02:08:47.326266 | orchestrator | Tuesday 31 March 2026 02:08:41 +0000 (0:00:01.207) 0:06:21.625 *********
2026-03-31 02:08:47.326273 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:08:47.326278 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:08:47.326282 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:08:47.326287 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:08:47.326291 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:08:47.326296 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:08:47.326300 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:08:47.326305 | orchestrator |
2026-03-31 02:08:47.326309 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-31 02:08:47.326314 | orchestrator | Tuesday 31 March 2026 02:08:42 +0000 (0:00:00.552) 0:06:22.178 *********
2026-03-31 02:08:47.326319 | orchestrator | ok: [testbed-manager]
2026-03-31 02:08:47.326323 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:08:47.326328 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:08:47.326332 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:08:47.326337 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:08:47.326341 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:08:47.326354 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:08:47.326358 | orchestrator |
2026-03-31 02:08:47.326363 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-31 02:08:47.326369 | orchestrator | Tuesday 31 March 2026 02:08:46 +0000 (0:00:03.736) 0:06:25.914 *********
2026-03-31 02:08:47.326373 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:08:47.326378 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:08:47.326382 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:08:47.326387 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:08:47.326391 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:08:47.326396 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:08:47.326400 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:08:47.326405 | orchestrator |
2026-03-31 02:08:47.326411 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-31 02:08:47.326419 | orchestrator | Tuesday 31 March 2026 02:08:46 +0000 (0:00:00.553) 0:06:26.468 *********
2026-03-31 02:08:47.326426 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-31 02:08:47.326433 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-31 02:08:47.326440 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:08:47.326446 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-31 02:08:47.326452 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-31 02:08:47.326459 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:08:47.326466 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-31 02:08:47.326473 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-31 02:08:47.326480 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:08:47.326494 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-31 02:09:07.397504 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-31 02:09:07.397621 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:09:07.397636 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-31 02:09:07.397648 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-31 02:09:07.397659 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:09:07.397697 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-31 02:09:07.397709 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-31 02:09:07.397720 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:09:07.397730 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-31 02:09:07.397741 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-31 02:09:07.397752 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:09:07.397763 | orchestrator |
2026-03-31 02:09:07.397776 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-31 02:09:07.397788 | orchestrator | Tuesday 31 March 2026 02:08:47 +0000 (0:00:00.764) 0:06:27.233 *********
2026-03-31 02:09:07.397799 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:09:07.397810 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:09:07.397820 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:09:07.397831 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:09:07.397842 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:09:07.397853 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:09:07.397863 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:09:07.397874 | orchestrator |
2026-03-31 02:09:07.397885 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-31 02:09:07.397896 | orchestrator | Tuesday 31 March 2026 02:08:48 +0000 (0:00:00.555) 0:06:27.788 *********
2026-03-31 02:09:07.397907 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:09:07.397918 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:09:07.397928 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:09:07.397939 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:09:07.397950 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:09:07.397961 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:09:07.397971 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:09:07.397982 | orchestrator |
2026-03-31 02:09:07.397993 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-31 02:09:07.398004 | orchestrator | Tuesday 31 March 2026 02:08:48 +0000 (0:00:00.603) 0:06:28.392 *********
2026-03-31 02:09:07.398080 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:09:07.398095 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:09:07.398108 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:09:07.398120 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:09:07.398133 | orchestrator |
skipping: [testbed-node-0] 2026-03-31 02:09:07.398145 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:09:07.398157 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:09:07.398170 | orchestrator | 2026-03-31 02:09:07.398184 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-31 02:09:07.398197 | orchestrator | Tuesday 31 March 2026 02:08:49 +0000 (0:00:00.582) 0:06:28.974 ********* 2026-03-31 02:09:07.398209 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.398280 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.398302 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.398323 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.398341 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:07.398356 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.398369 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.398383 | orchestrator | 2026-03-31 02:09:07.398395 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-31 02:09:07.398406 | orchestrator | Tuesday 31 March 2026 02:08:51 +0000 (0:00:01.944) 0:06:30.919 ********* 2026-03-31 02:09:07.398417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:09:07.398432 | orchestrator | 2026-03-31 02:09:07.398443 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-31 02:09:07.398454 | orchestrator | Tuesday 31 March 2026 02:08:52 +0000 (0:00:00.988) 0:06:31.908 ********* 2026-03-31 02:09:07.398483 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.398495 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:07.398520 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:07.398531 | orchestrator | 
changed: [testbed-node-5] 2026-03-31 02:09:07.398542 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:07.398552 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:07.398563 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:07.398574 | orchestrator | 2026-03-31 02:09:07.398584 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-31 02:09:07.398595 | orchestrator | Tuesday 31 March 2026 02:08:53 +0000 (0:00:00.940) 0:06:32.849 ********* 2026-03-31 02:09:07.398606 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.398617 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:07.398627 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:07.398638 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:07.398649 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:07.398659 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:07.398670 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:07.398681 | orchestrator | 2026-03-31 02:09:07.398692 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-31 02:09:07.398703 | orchestrator | Tuesday 31 March 2026 02:08:54 +0000 (0:00:00.845) 0:06:33.694 ********* 2026-03-31 02:09:07.398714 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.398725 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:07.398735 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:07.398746 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:07.398757 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:07.398768 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:07.398778 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:07.398806 | orchestrator | 2026-03-31 02:09:07.398818 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-31 02:09:07.398858 | 
orchestrator | Tuesday 31 March 2026 02:08:55 +0000 (0:00:01.606) 0:06:35.301 ********* 2026-03-31 02:09:07.398870 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:07.398882 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.398893 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.398903 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.398914 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:07.398925 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.398936 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.398946 | orchestrator | 2026-03-31 02:09:07.398957 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-31 02:09:07.398968 | orchestrator | Tuesday 31 March 2026 02:08:57 +0000 (0:00:01.396) 0:06:36.698 ********* 2026-03-31 02:09:07.398979 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.398990 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:07.399001 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:07.399011 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:07.399022 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:07.399033 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:07.399044 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:07.399055 | orchestrator | 2026-03-31 02:09:07.399066 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-31 02:09:07.399076 | orchestrator | Tuesday 31 March 2026 02:08:58 +0000 (0:00:01.343) 0:06:38.041 ********* 2026-03-31 02:09:07.399087 | orchestrator | changed: [testbed-manager] 2026-03-31 02:09:07.399097 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:07.399108 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:07.399119 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:07.399129 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:07.399140 | 
orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:07.399150 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:07.399161 | orchestrator | 2026-03-31 02:09:07.399181 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-31 02:09:07.399192 | orchestrator | Tuesday 31 March 2026 02:08:59 +0000 (0:00:01.399) 0:06:39.440 ********* 2026-03-31 02:09:07.399203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:09:07.399214 | orchestrator | 2026-03-31 02:09:07.399251 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-31 02:09:07.399271 | orchestrator | Tuesday 31 March 2026 02:09:00 +0000 (0:00:01.126) 0:06:40.566 ********* 2026-03-31 02:09:07.399290 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.399308 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.399326 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.399338 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.399348 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:07.399359 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.399369 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.399380 | orchestrator | 2026-03-31 02:09:07.399391 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-31 02:09:07.399406 | orchestrator | Tuesday 31 March 2026 02:09:02 +0000 (0:00:01.434) 0:06:42.001 ********* 2026-03-31 02:09:07.399423 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.399441 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.399459 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.399477 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.399495 | orchestrator | 
ok: [testbed-node-0] 2026-03-31 02:09:07.399507 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.399517 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.399528 | orchestrator | 2026-03-31 02:09:07.399539 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-31 02:09:07.399550 | orchestrator | Tuesday 31 March 2026 02:09:03 +0000 (0:00:01.182) 0:06:43.183 ********* 2026-03-31 02:09:07.399561 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.399572 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.399582 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.399593 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.399603 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:07.399614 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.399625 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.399635 | orchestrator | 2026-03-31 02:09:07.399646 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-31 02:09:07.399657 | orchestrator | Tuesday 31 March 2026 02:09:04 +0000 (0:00:01.175) 0:06:44.358 ********* 2026-03-31 02:09:07.399668 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:07.399695 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:07.399706 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:07.399716 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:07.399727 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:07.399738 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:07.399748 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:07.399759 | orchestrator | 2026-03-31 02:09:07.399770 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-31 02:09:07.399780 | orchestrator | Tuesday 31 March 2026 02:09:06 +0000 (0:00:01.384) 0:06:45.743 ********* 2026-03-31 02:09:07.399791 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:09:07.399802 | orchestrator | 2026-03-31 02:09:07.399813 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:07.399824 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.975) 0:06:46.719 ********* 2026-03-31 02:09:07.399835 | orchestrator | 2026-03-31 02:09:07.399845 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:07.399865 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.041) 0:06:46.760 ********* 2026-03-31 02:09:07.399876 | orchestrator | 2026-03-31 02:09:07.399887 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:07.399897 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.042) 0:06:46.803 ********* 2026-03-31 02:09:07.399908 | orchestrator | 2026-03-31 02:09:07.399919 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:07.399939 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.051) 0:06:46.855 ********* 2026-03-31 02:09:33.925624 | orchestrator | 2026-03-31 02:09:33.925742 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:33.925759 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.042) 0:06:46.898 ********* 2026-03-31 02:09:33.925771 | orchestrator | 2026-03-31 02:09:33.925783 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:33.925794 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.043) 0:06:46.941 ********* 2026-03-31 02:09:33.925805 | orchestrator | 2026-03-31 
02:09:33.925816 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-31 02:09:33.925827 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.047) 0:06:46.989 ********* 2026-03-31 02:09:33.925837 | orchestrator | 2026-03-31 02:09:33.925848 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-31 02:09:33.925860 | orchestrator | Tuesday 31 March 2026 02:09:07 +0000 (0:00:00.041) 0:06:47.031 ********* 2026-03-31 02:09:33.925871 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:33.925883 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:33.925894 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:33.925905 | orchestrator | 2026-03-31 02:09:33.925916 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-31 02:09:33.925927 | orchestrator | Tuesday 31 March 2026 02:09:08 +0000 (0:00:01.151) 0:06:48.182 ********* 2026-03-31 02:09:33.925938 | orchestrator | changed: [testbed-manager] 2026-03-31 02:09:33.925950 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:33.925961 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:33.925972 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:33.925982 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:33.925995 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:33.926084 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:33.926110 | orchestrator | 2026-03-31 02:09:33.926129 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-31 02:09:33.926141 | orchestrator | Tuesday 31 March 2026 02:09:10 +0000 (0:00:01.597) 0:06:49.780 ********* 2026-03-31 02:09:33.926151 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:33.926162 | orchestrator | changed: [testbed-manager] 2026-03-31 02:09:33.926173 | orchestrator | changed: [testbed-node-4] 2026-03-31 
02:09:33.926184 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:33.926195 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:33.926206 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:33.926216 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:33.926227 | orchestrator | 2026-03-31 02:09:33.926238 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-31 02:09:33.926249 | orchestrator | Tuesday 31 March 2026 02:09:11 +0000 (0:00:01.189) 0:06:50.970 ********* 2026-03-31 02:09:33.926259 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:33.926270 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:33.926281 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:33.926291 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:33.926302 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:33.926313 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:33.926346 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:33.926357 | orchestrator | 2026-03-31 02:09:33.926368 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-31 02:09:33.926379 | orchestrator | Tuesday 31 March 2026 02:09:13 +0000 (0:00:02.317) 0:06:53.288 ********* 2026-03-31 02:09:33.926416 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:09:33.926428 | orchestrator | 2026-03-31 02:09:33.926439 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-31 02:09:33.926450 | orchestrator | Tuesday 31 March 2026 02:09:13 +0000 (0:00:00.120) 0:06:53.408 ********* 2026-03-31 02:09:33.926461 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.926472 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:33.926483 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:33.926493 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:09:33.926504 | 
orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:33.926515 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:33.926526 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:33.926536 | orchestrator | 2026-03-31 02:09:33.926547 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-31 02:09:33.926559 | orchestrator | Tuesday 31 March 2026 02:09:14 +0000 (0:00:01.044) 0:06:54.453 ********* 2026-03-31 02:09:33.926569 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:33.926595 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:09:33.926607 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:09:33.926617 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:09:33.926628 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:09:33.926638 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:09:33.926649 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:09:33.926660 | orchestrator | 2026-03-31 02:09:33.926670 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-31 02:09:33.926681 | orchestrator | Tuesday 31 March 2026 02:09:15 +0000 (0:00:00.635) 0:06:55.089 ********* 2026-03-31 02:09:33.926693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:09:33.926707 | orchestrator | 2026-03-31 02:09:33.926718 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-31 02:09:33.926728 | orchestrator | Tuesday 31 March 2026 02:09:16 +0000 (0:00:01.121) 0:06:56.211 ********* 2026-03-31 02:09:33.926739 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.926749 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:33.926760 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 02:09:33.926771 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:33.926781 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:33.926792 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:33.926803 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:33.926813 | orchestrator | 2026-03-31 02:09:33.926824 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-31 02:09:33.926835 | orchestrator | Tuesday 31 March 2026 02:09:17 +0000 (0:00:00.852) 0:06:57.063 ********* 2026-03-31 02:09:33.926846 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-31 02:09:33.926875 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-31 02:09:33.926888 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-31 02:09:33.926899 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-31 02:09:33.926909 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-31 02:09:33.926920 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-31 02:09:33.926931 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-31 02:09:33.926941 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-31 02:09:33.926952 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-31 02:09:33.926963 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-31 02:09:33.926974 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-31 02:09:33.926984 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-31 02:09:33.927003 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-31 02:09:33.927014 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-31 02:09:33.927043 | orchestrator | 2026-03-31 02:09:33.927066 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-31 02:09:33.927078 | orchestrator | Tuesday 31 March 2026 02:09:19 +0000 (0:00:02.582) 0:06:59.646 ********* 2026-03-31 02:09:33.927088 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:33.927099 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:09:33.927110 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:09:33.927121 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:09:33.927131 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:09:33.927142 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:09:33.927152 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:09:33.927163 | orchestrator | 2026-03-31 02:09:33.927173 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-31 02:09:33.927184 | orchestrator | Tuesday 31 March 2026 02:09:20 +0000 (0:00:00.761) 0:07:00.407 ********* 2026-03-31 02:09:33.927196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:09:33.927210 | orchestrator | 2026-03-31 02:09:33.927220 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-31 02:09:33.927231 | orchestrator | Tuesday 31 March 2026 02:09:21 +0000 (0:00:00.945) 0:07:01.353 ********* 2026-03-31 02:09:33.927242 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.927253 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:33.927264 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:33.927274 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:33.927285 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:33.927295 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:33.927306 | orchestrator | ok: 
[testbed-node-2] 2026-03-31 02:09:33.927317 | orchestrator | 2026-03-31 02:09:33.927399 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-31 02:09:33.927412 | orchestrator | Tuesday 31 March 2026 02:09:22 +0000 (0:00:00.947) 0:07:02.301 ********* 2026-03-31 02:09:33.927424 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.927435 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:33.927447 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:33.927458 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:33.927470 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:33.927481 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:33.927492 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:33.927503 | orchestrator | 2026-03-31 02:09:33.927515 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-31 02:09:33.927527 | orchestrator | Tuesday 31 March 2026 02:09:23 +0000 (0:00:01.022) 0:07:03.324 ********* 2026-03-31 02:09:33.927538 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:33.927550 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:09:33.927561 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:09:33.927572 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:09:33.927584 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:09:33.927595 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:09:33.927606 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:09:33.927618 | orchestrator | 2026-03-31 02:09:33.927630 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-31 02:09:33.927642 | orchestrator | Tuesday 31 March 2026 02:09:24 +0000 (0:00:00.574) 0:07:03.898 ********* 2026-03-31 02:09:33.927654 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.927665 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:09:33.927677 | 
orchestrator | ok: [testbed-node-4] 2026-03-31 02:09:33.927689 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:09:33.927700 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:09:33.927720 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:09:33.927731 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:09:33.927741 | orchestrator | 2026-03-31 02:09:33.927752 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-31 02:09:33.927762 | orchestrator | Tuesday 31 March 2026 02:09:25 +0000 (0:00:01.562) 0:07:05.461 ********* 2026-03-31 02:09:33.927773 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:09:33.927783 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:09:33.927793 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:09:33.927803 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:09:33.927814 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:09:33.927824 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:09:33.927834 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:09:33.927844 | orchestrator | 2026-03-31 02:09:33.927854 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-31 02:09:33.927865 | orchestrator | Tuesday 31 March 2026 02:09:26 +0000 (0:00:00.624) 0:07:06.085 ********* 2026-03-31 02:09:33.927875 | orchestrator | ok: [testbed-manager] 2026-03-31 02:09:33.927885 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:09:33.927895 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:09:33.927905 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:09:33.927915 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:09:33.927925 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:09:33.927944 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:10:06.900161 | orchestrator | 2026-03-31 02:10:06.900266 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] ***********
2026-03-31 02:10:06.900283 | orchestrator | Tuesday 31 March 2026 02:09:33 +0000 (0:00:07.475) 0:07:13.561 *********
2026-03-31 02:10:06.900293 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900304 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:06.900314 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:06.900322 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:06.900331 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:06.900340 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:06.900350 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:06.900356 | orchestrator |
2026-03-31 02:10:06.900362 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-31 02:10:06.900368 | orchestrator | Tuesday 31 March 2026 02:09:35 +0000 (0:00:01.677) 0:07:15.238 *********
2026-03-31 02:10:06.900373 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900379 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:06.900385 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:06.900390 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:06.900396 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:06.900401 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:06.900407 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:06.900412 | orchestrator |
2026-03-31 02:10:06.900418 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-31 02:10:06.900424 | orchestrator | Tuesday 31 March 2026 02:09:37 +0000 (0:00:01.731) 0:07:16.969 *********
2026-03-31 02:10:06.900429 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900435 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:06.900477 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:06.900483 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:06.900489 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:06.900494 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:06.900500 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:06.900505 | orchestrator |
2026-03-31 02:10:06.900511 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-31 02:10:06.900517 | orchestrator | Tuesday 31 March 2026 02:09:39 +0000 (0:00:01.722) 0:07:18.692 *********
2026-03-31 02:10:06.900523 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900529 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.900534 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.900559 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.900565 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.900571 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.900576 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.900581 | orchestrator |
2026-03-31 02:10:06.900587 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-31 02:10:06.900593 | orchestrator | Tuesday 31 March 2026 02:09:39 +0000 (0:00:00.928) 0:07:19.620 *********
2026-03-31 02:10:06.900598 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:10:06.900604 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:10:06.900614 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:10:06.900623 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:10:06.900632 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:10:06.900641 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:10:06.900650 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:10:06.900659 | orchestrator |
2026-03-31 02:10:06.900668 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-31 02:10:06.900677 | orchestrator | Tuesday 31 March 2026 02:09:41 +0000 (0:00:01.180) 0:07:20.800 *********
2026-03-31 02:10:06.900686 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:10:06.900695 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:10:06.900704 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:10:06.900713 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:10:06.900722 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:10:06.900731 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:10:06.900740 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:10:06.900749 | orchestrator |
2026-03-31 02:10:06.900758 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-31 02:10:06.900768 | orchestrator | Tuesday 31 March 2026 02:09:41 +0000 (0:00:00.549) 0:07:21.350 *********
2026-03-31 02:10:06.900777 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900801 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.900812 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.900821 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.900830 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.900838 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.900853 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.900862 | orchestrator |
2026-03-31 02:10:06.900871 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-31 02:10:06.900881 | orchestrator | Tuesday 31 March 2026 02:09:42 +0000 (0:00:00.558) 0:07:21.908 *********
2026-03-31 02:10:06.900891 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900901 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.900910 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.900920 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.900930 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.900940 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.900950 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.900959 | orchestrator |
2026-03-31 02:10:06.900969 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-31 02:10:06.900979 | orchestrator | Tuesday 31 March 2026 02:09:42 +0000 (0:00:00.557) 0:07:22.465 *********
2026-03-31 02:10:06.900989 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.900999 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.901008 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.901018 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.901026 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.901035 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.901043 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.901052 | orchestrator |
2026-03-31 02:10:06.901061 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-31 02:10:06.901068 | orchestrator | Tuesday 31 March 2026 02:09:43 +0000 (0:00:00.767) 0:07:23.233 *********
2026-03-31 02:10:06.901077 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.901085 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.901101 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.901109 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.901116 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.901124 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.901133 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.901141 | orchestrator |
2026-03-31 02:10:06.901166 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-31 02:10:06.901175 | orchestrator | Tuesday 31 March 2026 02:09:49 +0000 (0:00:05.732) 0:07:28.966 *********
2026-03-31 02:10:06.901182 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:10:06.901187 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:10:06.901192 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:10:06.901197 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:10:06.901202 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:10:06.901206 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:10:06.901211 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:10:06.901216 | orchestrator |
2026-03-31 02:10:06.901221 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-31 02:10:06.901226 | orchestrator | Tuesday 31 March 2026 02:09:49 +0000 (0:00:00.579) 0:07:29.545 *********
2026-03-31 02:10:06.901232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:06.901239 | orchestrator |
2026-03-31 02:10:06.901244 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-31 02:10:06.901249 | orchestrator | Tuesday 31 March 2026 02:09:50 +0000 (0:00:01.061) 0:07:30.607 *********
2026-03-31 02:10:06.901254 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.901259 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.901264 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.901269 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.901273 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.901278 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.901283 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.901288 | orchestrator |
2026-03-31 02:10:06.901293 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-31 02:10:06.901298 | orchestrator | Tuesday 31 March 2026 02:09:52 +0000 (0:00:01.911) 0:07:32.518 *********
2026-03-31 02:10:06.901302 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.901307 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.901312 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.901317 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.901322 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.901326 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.901331 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.901336 | orchestrator |
2026-03-31 02:10:06.901341 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-31 02:10:06.901346 | orchestrator | Tuesday 31 March 2026 02:09:54 +0000 (0:00:01.148) 0:07:33.667 *********
2026-03-31 02:10:06.901351 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:06.901355 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:06.901360 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:06.901365 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:06.901369 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:06.901374 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:06.901379 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:06.901384 | orchestrator |
2026-03-31 02:10:06.901389 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-31 02:10:06.901393 | orchestrator | Tuesday 31 March 2026 02:09:54 +0000 (0:00:00.865) 0:07:34.533 *********
2026-03-31 02:10:06.901399 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901405 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901415 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901420 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901429 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901434 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901462 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-31 02:10:06.901470 | orchestrator |
2026-03-31 02:10:06.901478 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-31 02:10:06.901483 | orchestrator | Tuesday 31 March 2026 02:09:56 +0000 (0:00:01.990) 0:07:36.523 *********
2026-03-31 02:10:06.901488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:06.901493 | orchestrator |
2026-03-31 02:10:06.901498 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-31 02:10:06.901503 | orchestrator | Tuesday 31 March 2026 02:09:57 +0000 (0:00:00.875) 0:07:37.399 *********
2026-03-31 02:10:06.901508 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:06.901513 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:06.901518 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:06.901523 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:06.901528 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:06.901533 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:06.901537 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:06.901542 | orchestrator |
2026-03-31 02:10:06.901551 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-31 02:10:38.057677 | orchestrator | Tuesday 31 March 2026 02:10:06 +0000 (0:00:09.133) 0:07:46.532 *********
2026-03-31 02:10:38.057844 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:38.057863 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:38.057875 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:38.057887 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:38.057897 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:38.057908 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:38.057919 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:38.057930 | orchestrator |
2026-03-31 02:10:38.057942 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-31 02:10:38.057953 | orchestrator | Tuesday 31 March 2026 02:10:08 +0000 (0:00:01.959) 0:07:48.492 *********
2026-03-31 02:10:38.057964 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:38.057975 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:38.057986 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:38.057997 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:38.058008 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:38.058078 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:38.058091 | orchestrator |
2026-03-31 02:10:38.058103 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-31 02:10:38.058115 | orchestrator | Tuesday 31 March 2026 02:10:10 +0000 (0:00:01.366) 0:07:49.858 *********
2026-03-31 02:10:38.058126 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.058140 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.058153 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.058166 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.058179 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.058216 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.058229 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.058241 | orchestrator |
2026-03-31 02:10:38.058254 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-31 02:10:38.058267 | orchestrator |
2026-03-31 02:10:38.058279 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-31 02:10:38.058292 | orchestrator | Tuesday 31 March 2026 02:10:11 +0000 (0:00:01.214) 0:07:51.073 *********
2026-03-31 02:10:38.058305 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:10:38.058317 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:10:38.058329 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:10:38.058342 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:10:38.058355 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:10:38.058367 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:10:38.058379 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:10:38.058392 | orchestrator |
2026-03-31 02:10:38.058404 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-31 02:10:38.058417 | orchestrator |
2026-03-31 02:10:38.058430 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-31 02:10:38.058441 | orchestrator | Tuesday 31 March 2026 02:10:12 +0000 (0:00:00.759) 0:07:51.833 *********
2026-03-31 02:10:38.058452 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.058463 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.058474 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.058485 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.058496 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.058506 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.058517 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.058528 | orchestrator |
2026-03-31 02:10:38.058560 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-31 02:10:38.058572 | orchestrator | Tuesday 31 March 2026 02:10:13 +0000 (0:00:01.346) 0:07:53.180 *********
2026-03-31 02:10:38.058583 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:38.058594 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:38.058604 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:38.058615 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:38.058626 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:38.058637 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:38.058647 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:38.058658 | orchestrator |
2026-03-31 02:10:38.058669 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-31 02:10:38.058680 | orchestrator | Tuesday 31 March 2026 02:10:15 +0000 (0:00:01.478) 0:07:54.658 *********
2026-03-31 02:10:38.058691 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:10:38.058702 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:10:38.058712 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:10:38.058723 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:10:38.058733 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:10:38.058783 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:10:38.058794 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:10:38.058805 | orchestrator |
2026-03-31 02:10:38.058816 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-31 02:10:38.058827 | orchestrator | Tuesday 31 March 2026 02:10:15 +0000 (0:00:00.545) 0:07:55.204 *********
2026-03-31 02:10:38.058839 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:38.058851 | orchestrator |
2026-03-31 02:10:38.058862 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-31 02:10:38.058873 | orchestrator | Tuesday 31 March 2026 02:10:16 +0000 (0:00:01.060) 0:07:56.265 *********
2026-03-31 02:10:38.058885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:38.058909 | orchestrator |
2026-03-31 02:10:38.058920 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-31 02:10:38.058931 | orchestrator | Tuesday 31 March 2026 02:10:17 +0000 (0:00:00.844) 0:07:57.109 *********
2026-03-31 02:10:38.058941 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.058952 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.058963 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.058973 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.058984 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.058995 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059005 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059016 | orchestrator |
2026-03-31 02:10:38.059045 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-31 02:10:38.059056 | orchestrator | Tuesday 31 March 2026 02:10:26 +0000 (0:00:08.557) 0:08:05.666 *********
2026-03-31 02:10:38.059067 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059078 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059089 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059100 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059110 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059121 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059132 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059143 | orchestrator |
2026-03-31 02:10:38.059153 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-31 02:10:38.059164 | orchestrator | Tuesday 31 March 2026 02:10:27 +0000 (0:00:01.096) 0:08:06.762 *********
2026-03-31 02:10:38.059175 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059186 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059197 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059207 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059218 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059228 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059239 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059250 | orchestrator |
2026-03-31 02:10:38.059261 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-31 02:10:38.059271 | orchestrator | Tuesday 31 March 2026 02:10:28 +0000 (0:00:01.399) 0:08:08.162 *********
2026-03-31 02:10:38.059282 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059293 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059304 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059315 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059325 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059336 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059347 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059357 | orchestrator |
2026-03-31 02:10:38.059368 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-31 02:10:38.059379 | orchestrator | Tuesday 31 March 2026 02:10:30 +0000 (0:00:01.991) 0:08:10.154 *********
2026-03-31 02:10:38.059390 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059401 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059411 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059422 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059433 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059444 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059454 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059465 | orchestrator |
2026-03-31 02:10:38.059476 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-31 02:10:38.059487 | orchestrator | Tuesday 31 March 2026 02:10:31 +0000 (0:00:01.261) 0:08:11.415 *********
2026-03-31 02:10:38.059498 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059509 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059527 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059566 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059578 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059588 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059599 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059610 | orchestrator |
2026-03-31 02:10:38.059621 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-31 02:10:38.059632 | orchestrator |
2026-03-31 02:10:38.059643 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-31 02:10:38.059654 | orchestrator | Tuesday 31 March 2026 02:10:32 +0000 (0:00:01.130) 0:08:12.545 *********
2026-03-31 02:10:38.059665 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:38.059676 | orchestrator |
2026-03-31 02:10:38.059687 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-31 02:10:38.059697 | orchestrator | Tuesday 31 March 2026 02:10:33 +0000 (0:00:00.888) 0:08:13.434 *********
2026-03-31 02:10:38.059708 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:38.059719 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:38.059730 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:38.059741 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:38.059751 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:38.059762 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:38.059778 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:38.059789 | orchestrator |
2026-03-31 02:10:38.059800 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-31 02:10:38.059811 | orchestrator | Tuesday 31 March 2026 02:10:34 +0000 (0:00:01.064) 0:08:14.499 *********
2026-03-31 02:10:38.059822 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:38.059833 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:38.059844 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:38.059855 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:38.059865 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:38.059876 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:38.059887 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:38.059898 | orchestrator |
2026-03-31 02:10:38.059909 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-31 02:10:38.059920 | orchestrator | Tuesday 31 March 2026 02:10:36 +0000 (0:00:01.200) 0:08:15.699 *********
2026-03-31 02:10:38.059931 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:10:38.059942 | orchestrator |
2026-03-31 02:10:38.059953 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-31 02:10:38.059964 | orchestrator | Tuesday 31 March 2026 02:10:37 +0000 (0:00:01.092) 0:08:16.792 *********
2026-03-31 02:10:38.059975 | orchestrator | ok: [testbed-manager]
2026-03-31 02:10:38.059985 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:10:38.059996 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:10:38.060007 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:10:38.060018 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:10:38.060029 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:10:38.060040 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:10:38.060050 | orchestrator |
2026-03-31 02:10:38.060068 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-31 02:10:39.719180 | orchestrator | Tuesday 31 March 2026 02:10:38 +0000 (0:00:00.895) 0:08:17.687 *********
2026-03-31 02:10:39.719272 | orchestrator | changed: [testbed-manager]
2026-03-31 02:10:39.719282 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:10:39.719289 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:10:39.719295 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:10:39.719301 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:10:39.719308 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:10:39.719314 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:10:39.719344 | orchestrator |
2026-03-31 02:10:39.719351 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:10:39.719360 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-31 02:10:39.719367 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-31 02:10:39.719373 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-31 02:10:39.719379 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-31 02:10:39.719385 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-31 02:10:39.719391 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-31 02:10:39.719396 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-31 02:10:39.719402 | orchestrator |
2026-03-31 02:10:39.719408 | orchestrator |
2026-03-31 02:10:39.719414 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:10:39.719419 | orchestrator | Tuesday 31 March 2026 02:10:39 +0000 (0:00:01.132) 0:08:18.820 *********
2026-03-31 02:10:39.719425 | orchestrator | ===============================================================================
2026-03-31 02:10:39.719431 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.75s
2026-03-31 02:10:39.719437 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.88s
2026-03-31 02:10:39.719443 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.51s
2026-03-31 02:10:39.719449 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.71s
2026-03-31 02:10:39.719455 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.63s
2026-03-31 02:10:39.719462 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.52s
2026-03-31 02:10:39.719468 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.44s
2026-03-31 02:10:39.719475 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.55s
2026-03-31 02:10:39.719480 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2026-03-31 02:10:39.719486 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.13s
2026-03-31 02:10:39.719492 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.87s
2026-03-31 02:10:39.719499 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.56s
2026-03-31 02:10:39.719505 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.27s
2026-03-31 02:10:39.719526 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.00s
2026-03-31 02:10:39.719532 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.95s
2026-03-31 02:10:39.719538 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.48s
2026-03-31 02:10:39.719589 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.58s
2026-03-31 02:10:39.719595 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.00s
2026-03-31 02:10:39.719601 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.88s
2026-03-31 02:10:39.719607 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s
2026-03-31 02:10:40.067390 | orchestrator | + osism apply fail2ban
2026-03-31 02:10:53.276248 | orchestrator | 2026-03-31 02:10:53 | INFO  | Task 35342857-60fe-4f79-89bf-9e6a70d384fe (fail2ban) was prepared for execution.
2026-03-31 02:10:53.276378 | orchestrator | 2026-03-31 02:10:53 | INFO  | It takes a moment until task 35342857-60fe-4f79-89bf-9e6a70d384fe (fail2ban) has been started and output is visible here.
2026-03-31 02:11:16.540396 | orchestrator |
2026-03-31 02:11:16.540522 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-31 02:11:16.540539 | orchestrator |
2026-03-31 02:11:16.540552 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-31 02:11:16.540564 | orchestrator | Tuesday 31 March 2026 02:10:58 +0000 (0:00:00.306) 0:00:00.306 *********
2026-03-31 02:11:16.540577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:11:16.540591 | orchestrator |
2026-03-31 02:11:16.540602 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-31 02:11:16.540613 | orchestrator | Tuesday 31 March 2026 02:10:59 +0000 (0:00:01.275) 0:00:01.581 *********
2026-03-31 02:11:16.540624 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:11:16.540636 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:11:16.540647 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:11:16.540742 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:11:16.540760 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:11:16.540778 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:11:16.540795 | orchestrator | changed: [testbed-manager]
2026-03-31 02:11:16.540812 | orchestrator |
2026-03-31 02:11:16.540830 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-31 02:11:16.540847 | orchestrator | Tuesday 31 March 2026 02:11:11 +0000 (0:00:11.664) 0:00:13.245 *********
2026-03-31 02:11:16.540863 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:11:16.540881 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:11:16.540898 | orchestrator | changed: [testbed-manager]
2026-03-31 02:11:16.540916 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:11:16.540934 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:11:16.540954 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:11:16.540968 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:11:16.540980 | orchestrator |
2026-03-31 02:11:16.540992 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-31 02:11:16.541005 | orchestrator | Tuesday 31 March 2026 02:11:12 +0000 (0:00:01.573) 0:00:14.819 *********
2026-03-31 02:11:16.541018 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:11:16.541032 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:11:16.541044 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:11:16.541056 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:11:16.541069 | orchestrator | ok: [testbed-manager]
2026-03-31 02:11:16.541081 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:11:16.541093 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:11:16.541105 | orchestrator |
2026-03-31 02:11:16.541118 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-31 02:11:16.541131 | orchestrator | Tuesday 31 March 2026 02:11:14 +0000 (0:00:01.512) 0:00:16.331 *********
2026-03-31 02:11:16.541143 | orchestrator | changed: [testbed-manager]
2026-03-31 02:11:16.541155 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:11:16.541167 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:11:16.541180 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:11:16.541192 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:11:16.541204 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:11:16.541216 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:11:16.541229 | orchestrator |
2026-03-31 02:11:16.541241 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:11:16.541254 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541298 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541311 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541325 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541337 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541350 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541361 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:11:16.541372 | orchestrator |
2026-03-31 02:11:16.541383 | orchestrator |
2026-03-31 02:11:16.541394 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:11:16.541405 | orchestrator | Tuesday 31 March 2026 02:11:16 +0000 (0:00:01.806) 0:00:18.137 *********
2026-03-31 02:11:16.541416 | orchestrator | ===============================================================================
2026-03-31 02:11:16.541426 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.66s
2026-03-31 02:11:16.541437 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.81s
2026-03-31 02:11:16.541447 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.57s
2026-03-31 02:11:16.541458 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.51s 2026-03-31 02:11:16.541469 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.28s 2026-03-31 02:11:17.096787 | orchestrator | + osism apply network 2026-03-31 02:11:29.506326 | orchestrator | 2026-03-31 02:11:29 | INFO  | Task b7b6bd94-d60d-40be-b83f-1b55057a9355 (network) was prepared for execution. 2026-03-31 02:11:29.506431 | orchestrator | 2026-03-31 02:11:29 | INFO  | It takes a moment until task b7b6bd94-d60d-40be-b83f-1b55057a9355 (network) has been started and output is visible here. 2026-03-31 02:11:59.828030 | orchestrator | 2026-03-31 02:11:59.828145 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-31 02:11:59.828160 | orchestrator | 2026-03-31 02:11:59.828169 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-31 02:11:59.828178 | orchestrator | Tuesday 31 March 2026 02:11:34 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-03-31 02:11:59.828185 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.828195 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.828202 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.828210 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.828218 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.828225 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.828233 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.828241 | orchestrator | 2026-03-31 02:11:59.828249 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-31 02:11:59.828258 | orchestrator | Tuesday 31 March 2026 02:11:35 +0000 (0:00:00.886) 0:00:01.195 ********* 2026-03-31 02:11:59.828268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:11:59.828278 | orchestrator | 2026-03-31 02:11:59.828286 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-31 02:11:59.828293 | orchestrator | Tuesday 31 March 2026 02:11:36 +0000 (0:00:01.338) 0:00:02.534 ********* 2026-03-31 02:11:59.828325 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.828334 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.828342 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.828350 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.828357 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.828364 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.828371 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.828379 | orchestrator | 2026-03-31 02:11:59.828386 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-31 02:11:59.828393 | orchestrator | Tuesday 31 March 2026 02:11:38 +0000 (0:00:02.030) 0:00:04.565 ********* 2026-03-31 02:11:59.828400 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.828408 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.828416 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.828424 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.828432 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.828439 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.828447 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.828454 | orchestrator | 2026-03-31 02:11:59.828461 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-31 02:11:59.828469 | orchestrator | Tuesday 31 March 2026 02:11:40 +0000 (0:00:01.836) 0:00:06.401 ********* 2026-03-31 02:11:59.828476 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-31 02:11:59.828484 | orchestrator | ok: 
[testbed-manager] => (item=/etc/netplan) 2026-03-31 02:11:59.828491 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-31 02:11:59.828500 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-31 02:11:59.828508 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-31 02:11:59.828515 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-31 02:11:59.828522 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-31 02:11:59.828528 | orchestrator | 2026-03-31 02:11:59.828554 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-31 02:11:59.828561 | orchestrator | Tuesday 31 March 2026 02:11:41 +0000 (0:00:01.005) 0:00:07.406 ********* 2026-03-31 02:11:59.828569 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 02:11:59.828579 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 02:11:59.828587 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 02:11:59.828595 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 02:11:59.828603 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 02:11:59.828611 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 02:11:59.828619 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 02:11:59.828627 | orchestrator | 2026-03-31 02:11:59.828635 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-31 02:11:59.828644 | orchestrator | Tuesday 31 March 2026 02:11:44 +0000 (0:00:03.471) 0:00:10.877 ********* 2026-03-31 02:11:59.828653 | orchestrator | changed: [testbed-manager] 2026-03-31 02:11:59.828661 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:11:59.828669 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:11:59.828678 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:11:59.828686 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:11:59.828698 | orchestrator | 
changed: [testbed-node-4] 2026-03-31 02:11:59.828706 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:11:59.828714 | orchestrator | 2026-03-31 02:11:59.828721 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-31 02:11:59.828729 | orchestrator | Tuesday 31 March 2026 02:11:46 +0000 (0:00:01.668) 0:00:12.546 ********* 2026-03-31 02:11:59.828737 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 02:11:59.828745 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 02:11:59.828752 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 02:11:59.828760 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 02:11:59.828769 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 02:11:59.828783 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 02:11:59.828844 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 02:11:59.828854 | orchestrator | 2026-03-31 02:11:59.828862 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-31 02:11:59.828871 | orchestrator | Tuesday 31 March 2026 02:11:48 +0000 (0:00:01.807) 0:00:14.353 ********* 2026-03-31 02:11:59.828878 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.828886 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.828893 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.828901 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.828909 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.828917 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.828924 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.828931 | orchestrator | 2026-03-31 02:11:59.828939 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-31 02:11:59.828967 | orchestrator | Tuesday 31 March 2026 02:11:49 +0000 (0:00:01.228) 0:00:15.582 ********* 2026-03-31 02:11:59.828975 | orchestrator 
| skipping: [testbed-manager] 2026-03-31 02:11:59.828982 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:11:59.828991 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:11:59.828999 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:11:59.829006 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:11:59.829014 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:11:59.829021 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:11:59.829029 | orchestrator | 2026-03-31 02:11:59.829037 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-31 02:11:59.829045 | orchestrator | Tuesday 31 March 2026 02:11:50 +0000 (0:00:00.709) 0:00:16.291 ********* 2026-03-31 02:11:59.829053 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.829061 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.829068 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.829076 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.829083 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.829090 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.829097 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.829104 | orchestrator | 2026-03-31 02:11:59.829112 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-31 02:11:59.829119 | orchestrator | Tuesday 31 March 2026 02:11:52 +0000 (0:00:02.240) 0:00:18.532 ********* 2026-03-31 02:11:59.829127 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:11:59.829135 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:11:59.829143 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:11:59.829150 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:11:59.829158 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:11:59.829166 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:11:59.829176 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-31 02:11:59.829185 | orchestrator | 2026-03-31 02:11:59.829194 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-31 02:11:59.829202 | orchestrator | Tuesday 31 March 2026 02:11:53 +0000 (0:00:00.973) 0:00:19.505 ********* 2026-03-31 02:11:59.829211 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.829218 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:11:59.829227 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:11:59.829235 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:11:59.829244 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:11:59.829252 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:11:59.829259 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:11:59.829267 | orchestrator | 2026-03-31 02:11:59.829276 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-31 02:11:59.829284 | orchestrator | Tuesday 31 March 2026 02:11:55 +0000 (0:00:01.699) 0:00:21.205 ********* 2026-03-31 02:11:59.829293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:11:59.829311 | orchestrator | 2026-03-31 02:11:59.829319 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-31 02:11:59.829327 | orchestrator | Tuesday 31 March 2026 02:11:56 +0000 (0:00:01.338) 0:00:22.543 ********* 2026-03-31 02:11:59.829335 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.829344 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.829352 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.829360 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.829367 | orchestrator | 
ok: [testbed-node-3] 2026-03-31 02:11:59.829376 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.829384 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.829391 | orchestrator | 2026-03-31 02:11:59.829399 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-31 02:11:59.829406 | orchestrator | Tuesday 31 March 2026 02:11:57 +0000 (0:00:01.180) 0:00:23.723 ********* 2026-03-31 02:11:59.829414 | orchestrator | ok: [testbed-manager] 2026-03-31 02:11:59.829421 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:11:59.829429 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:11:59.829436 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:11:59.829444 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:11:59.829451 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:11:59.829459 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:11:59.829467 | orchestrator | 2026-03-31 02:11:59.829474 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-31 02:11:59.829482 | orchestrator | Tuesday 31 March 2026 02:11:58 +0000 (0:00:00.697) 0:00:24.421 ********* 2026-03-31 02:11:59.829494 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829502 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829510 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829518 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829525 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829532 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829540 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829547 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829555 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829563 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829570 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829578 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829585 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-31 02:11:59.829593 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-31 02:11:59.829600 | orchestrator | 2026-03-31 02:11:59.829613 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-31 02:12:17.609095 | orchestrator | Tuesday 31 March 2026 02:11:59 +0000 (0:00:01.341) 0:00:25.763 ********* 2026-03-31 02:12:17.609219 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:12:17.609236 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:12:17.609246 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:12:17.609256 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:12:17.609266 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:12:17.609276 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:12:17.609285 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:12:17.609295 | orchestrator | 2026-03-31 02:12:17.609306 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-31 02:12:17.609339 | orchestrator | Tuesday 31 March 2026 02:12:00 +0000 (0:00:00.666) 0:00:26.430 ********* 2026-03-31 02:12:17.609350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, 
testbed-node-1, testbed-node-5, testbed-manager, testbed-node-4, testbed-node-3 2026-03-31 02:12:17.609362 | orchestrator | 2026-03-31 02:12:17.609372 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-31 02:12:17.609382 | orchestrator | Tuesday 31 March 2026 02:12:05 +0000 (0:00:04.978) 0:00:31.408 ********* 2026-03-31 02:12:17.609393 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609427 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-31 
02:12:17.609590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609611 | orchestrator | 2026-03-31 02:12:17.609621 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-31 02:12:17.609633 | orchestrator | Tuesday 31 March 2026 02:12:11 +0000 (0:00:06.048) 0:00:37.456 ********* 2026-03-31 02:12:17.609644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609679 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-31 02:12:17.609725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609764 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:17.609805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:24.293543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-31 02:12:24.293686 | orchestrator | 2026-03-31 02:12:24.293717 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-31 02:12:24.293739 | orchestrator | Tuesday 31 March 2026 02:12:17 +0000 (0:00:06.085) 0:00:43.542 ********* 2026-03-31 02:12:24.293761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:12:24.293781 | orchestrator | 2026-03-31 02:12:24.293798 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-31 02:12:24.293816 | orchestrator | Tuesday 31 March 2026 02:12:18 +0000 (0:00:01.368) 0:00:44.910 ********* 2026-03-31 
02:12:24.293836 | orchestrator | ok: [testbed-manager] 2026-03-31 02:12:24.293856 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:12:24.293969 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:12:24.293982 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:12:24.293993 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:12:24.294003 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:12:24.294014 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:12:24.294095 | orchestrator | 2026-03-31 02:12:24.294108 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-31 02:12:24.294141 | orchestrator | Tuesday 31 March 2026 02:12:20 +0000 (0:00:01.249) 0:00:46.160 ********* 2026-03-31 02:12:24.294155 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-31 02:12:24.294169 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-31 02:12:24.294182 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-31 02:12:24.294193 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-31 02:12:24.294204 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-31 02:12:24.294215 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-31 02:12:24.294227 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-31 02:12:24.294238 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-31 02:12:24.294249 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:12:24.294261 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-31 02:12:24.294271 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.network)
2026-03-31 02:12:24.294283 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-31 02:12:24.294294 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-31 02:12:24.294304 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:12:24.294315 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-31 02:12:24.294353 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-31 02:12:24.294364 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:12:24.294375 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-31 02:12:24.294386 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-31 02:12:24.294397 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-31 02:12:24.294423 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-31 02:12:24.294435 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-31 02:12:24.294446 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-31 02:12:24.294457 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:12:24.294467 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-31 02:12:24.294478 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-31 02:12:24.294489 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-31 02:12:24.294499 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-31 02:12:24.294510 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:12:24.294521 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:12:24.294532 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-31 02:12:24.294542 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-31 02:12:24.294553 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-31 02:12:24.294564 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-31 02:12:24.294574 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:12:24.294585 | orchestrator |
2026-03-31 02:12:24.294596 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-31 02:12:24.294630 | orchestrator | Tuesday 31 March 2026 02:12:22 +0000 (0:00:02.115) 0:00:48.276 *********
2026-03-31 02:12:24.294641 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:12:24.294652 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:12:24.294663 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:12:24.294674 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:12:24.294684 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:12:24.294695 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:12:24.294705 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:12:24.294716 | orchestrator |
2026-03-31 02:12:24.294727 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-31 02:12:24.294737 | orchestrator | Tuesday 31 March 2026 02:12:23 +0000 (0:00:00.727) 0:00:49.004 *********
2026-03-31 02:12:24.294748 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:12:24.294759 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:12:24.294770 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:12:24.294780 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:12:24.294792 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:12:24.294803 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:12:24.294814 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:12:24.294824 | orchestrator |
2026-03-31 02:12:24.294835 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:12:24.294847 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 02:12:24.294859 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294898 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294910 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294921 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294931 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294942 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 02:12:24.294953 | orchestrator |
2026-03-31 02:12:24.294964 | orchestrator |
2026-03-31 02:12:24.294974 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:12:24.294985 | orchestrator | Tuesday 31 March 2026 02:12:23 +0000 (0:00:00.752) 0:00:49.756 *********
2026-03-31 02:12:24.294996 | orchestrator | ===============================================================================
2026-03-31 02:12:24.295007 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.09s
2026-03-31 02:12:24.295017 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.05s
2026-03-31 02:12:24.295028 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.98s
2026-03-31 02:12:24.295039 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.47s
2026-03-31 02:12:24.295049 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s
2026-03-31 02:12:24.295060 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.12s
2026-03-31 02:12:24.295071 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s
2026-03-31 02:12:24.295081 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.84s
2026-03-31 02:12:24.295099 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s
2026-03-31 02:12:24.295109 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.70s
2026-03-31 02:12:24.295120 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-03-31 02:12:24.295131 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.37s
2026-03-31 02:12:24.295141 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s
2026-03-31 02:12:24.295152 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.34s
2026-03-31 02:12:24.295163 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.34s
2026-03-31 02:12:24.295173 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s
2026-03-31 02:12:24.295184 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s
2026-03-31 02:12:24.295194 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-03-31 02:12:24.295205 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s
2026-03-31 02:12:24.295216 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s
2026-03-31 02:12:24.656151 | orchestrator | + osism apply wireguard
2026-03-31 02:12:36.764482 | orchestrator | 2026-03-31 02:12:36 | INFO  | Task 71699e01-755e-4e60-9c92-705883918398 (wireguard) was prepared for execution.
2026-03-31 02:12:36.764569 | orchestrator | 2026-03-31 02:12:36 | INFO  | It takes a moment until task 71699e01-755e-4e60-9c92-705883918398 (wireguard) has been started and output is visible here.
2026-03-31 02:12:58.571805 | orchestrator |
2026-03-31 02:12:58.571896 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-31 02:12:58.571923 | orchestrator |
2026-03-31 02:12:58.571930 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-31 02:12:58.571936 | orchestrator | Tuesday 31 March 2026 02:12:41 +0000 (0:00:00.267) 0:00:00.267 *********
2026-03-31 02:12:58.571942 | orchestrator | ok: [testbed-manager]
2026-03-31 02:12:58.571949 | orchestrator |
2026-03-31 02:12:58.571954 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-31 02:12:58.571960 | orchestrator | Tuesday 31 March 2026 02:12:43 +0000 (0:00:01.603) 0:00:01.871 *********
2026-03-31 02:12:58.571966 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572013 | orchestrator |
2026-03-31 02:12:58.572027 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-31 02:12:58.572037 | orchestrator | Tuesday 31 March 2026 02:12:50 +0000 (0:00:07.018) 0:00:08.890 *********
2026-03-31 02:12:58.572046 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572055 | orchestrator |
2026-03-31 02:12:58.572063 | orchestrator |
TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-31 02:12:58.572071 | orchestrator | Tuesday 31 March 2026 02:12:50 +0000 (0:00:00.581) 0:00:09.471 *********
2026-03-31 02:12:58.572079 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572089 | orchestrator |
2026-03-31 02:12:58.572097 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-31 02:12:58.572106 | orchestrator | Tuesday 31 March 2026 02:12:51 +0000 (0:00:00.460) 0:00:09.932 *********
2026-03-31 02:12:58.572115 | orchestrator | ok: [testbed-manager]
2026-03-31 02:12:58.572123 | orchestrator |
2026-03-31 02:12:58.572132 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-31 02:12:58.572142 | orchestrator | Tuesday 31 March 2026 02:12:52 +0000 (0:00:00.753) 0:00:10.685 *********
2026-03-31 02:12:58.572150 | orchestrator | ok: [testbed-manager]
2026-03-31 02:12:58.572155 | orchestrator |
2026-03-31 02:12:58.572161 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-31 02:12:58.572167 | orchestrator | Tuesday 31 March 2026 02:12:52 +0000 (0:00:00.450) 0:00:11.136 *********
2026-03-31 02:12:58.572172 | orchestrator | ok: [testbed-manager]
2026-03-31 02:12:58.572178 | orchestrator |
2026-03-31 02:12:58.572183 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-31 02:12:58.572189 | orchestrator | Tuesday 31 March 2026 02:12:53 +0000 (0:00:00.492) 0:00:11.629 *********
2026-03-31 02:12:58.572194 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572200 | orchestrator |
2026-03-31 02:12:58.572205 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-31 02:12:58.572211 | orchestrator | Tuesday 31 March 2026 02:12:54 +0000 (0:00:01.270) 0:00:12.900 *********
2026-03-31 02:12:58.572216 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-31 02:12:58.572222 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572227 | orchestrator |
2026-03-31 02:12:58.572232 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-31 02:12:58.572238 | orchestrator | Tuesday 31 March 2026 02:12:55 +0000 (0:00:00.991) 0:00:13.891 *********
2026-03-31 02:12:58.572243 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572248 | orchestrator |
2026-03-31 02:12:58.572254 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-31 02:12:58.572260 | orchestrator | Tuesday 31 March 2026 02:12:57 +0000 (0:00:01.807) 0:00:15.699 *********
2026-03-31 02:12:58.572265 | orchestrator | changed: [testbed-manager]
2026-03-31 02:12:58.572271 | orchestrator |
2026-03-31 02:12:58.572276 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:12:58.572282 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:12:58.572289 | orchestrator |
2026-03-31 02:12:58.572295 | orchestrator |
2026-03-31 02:12:58.572300 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:12:58.572305 | orchestrator | Tuesday 31 March 2026 02:12:58 +0000 (0:00:00.986) 0:00:16.686 *********
2026-03-31 02:12:58.572318 | orchestrator | ===============================================================================
2026-03-31 02:12:58.572325 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.02s
2026-03-31 02:12:58.572334 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.81s
2026-03-31 02:12:58.572345 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.60s
2026-03-31 02:12:58.572358 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s
2026-03-31 02:12:58.572366 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2026-03-31 02:12:58.572375 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2026-03-31 02:12:58.572383 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.75s
2026-03-31 02:12:58.572392 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s
2026-03-31 02:12:58.572401 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.49s
2026-03-31 02:12:58.572409 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2026-03-31 02:12:58.572417 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s
2026-03-31 02:12:58.931602 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-31 02:12:58.959331 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-31 02:12:58.959424 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-31 02:12:59.032041 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 205 0 --:--:-- --:--:-- --:--:-- 208
2026-03-31 02:12:59.045836 | orchestrator | + osism apply --environment custom workarounds
2026-03-31 02:13:01.075609 | orchestrator | 2026-03-31 02:13:01 | INFO  | Trying to run play workarounds in environment custom
2026-03-31 02:13:11.223987 | orchestrator | 2026-03-31 02:13:11 | INFO  | Task f2c1a299-f874-45b9-9b27-0e9591d29c80 (workarounds) was prepared for execution.
2026-03-31 02:13:11.224168 | orchestrator | 2026-03-31 02:13:11 | INFO  | It takes a moment until task f2c1a299-f874-45b9-9b27-0e9591d29c80 (workarounds) has been started and output is visible here.
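The `osism apply wireguard` run above reports tasks such as "Create public and private key - server", "Create preshared key" and "Manage wg-quick@wg0.service service". As a hedged sketch only (this is not the actual osism.services.wireguard source; the key file paths under /etc/wireguard/ are assumptions), tasks of this shape could look like:

```yaml
# Illustrative Ansible tasks, not the real role; key file locations are assumed.
- name: Create public and private key - server
  ansible.builtin.shell: |
    umask 077
    wg genkey | tee /etc/wireguard/server.private.key | wg pubkey > /etc/wireguard/server.public.key
  args:
    creates: /etc/wireguard/server.private.key

- name: Create preshared key
  ansible.builtin.shell: umask 077 && wg genpsk > /etc/wireguard/server.preshared.key
  args:
    creates: /etc/wireguard/server.preshared.key

- name: Manage wg-quick@wg0.service service
  ansible.builtin.service:
    name: wg-quick@wg0
    state: started
    enabled: true
```

`creates:` makes the key generation idempotent, consistent with the changed/ok pattern the log would show across repeated runs.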
2026-03-31 02:13:38.067875 | orchestrator |
2026-03-31 02:13:38.067983 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 02:13:38.067996 | orchestrator |
2026-03-31 02:13:38.068005 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-31 02:13:38.068014 | orchestrator | Tuesday 31 March 2026 02:13:15 +0000 (0:00:00.148) 0:00:00.148 *********
2026-03-31 02:13:38.068023 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068031 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068040 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068048 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068056 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068064 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068072 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-31 02:13:38.068080 | orchestrator |
2026-03-31 02:13:38.068088 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-31 02:13:38.068127 | orchestrator |
2026-03-31 02:13:38.068136 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-31 02:13:38.068144 | orchestrator | Tuesday 31 March 2026 02:13:16 +0000 (0:00:00.861) 0:00:01.009 *********
2026-03-31 02:13:38.068153 | orchestrator | ok: [testbed-manager]
2026-03-31 02:13:38.068162 | orchestrator |
2026-03-31 02:13:38.068190 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-31 02:13:38.068198 | orchestrator |
2026-03-31 02:13:38.068206 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-31 02:13:38.068214 | orchestrator | Tuesday 31 March 2026 02:13:19 +0000 (0:00:02.697) 0:00:03.707 *********
2026-03-31 02:13:38.068222 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:13:38.068230 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:13:38.068238 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:13:38.068245 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:13:38.068253 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:13:38.068260 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:13:38.068268 | orchestrator |
2026-03-31 02:13:38.068276 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-31 02:13:38.068284 | orchestrator |
2026-03-31 02:13:38.068292 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-31 02:13:38.068300 | orchestrator | Tuesday 31 March 2026 02:13:20 +0000 (0:00:01.833) 0:00:05.540 *********
2026-03-31 02:13:38.068308 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068317 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068325 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068332 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068340 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068360 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-31 02:13:38.068369 | orchestrator |
2026-03-31 02:13:38.068377 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-31 02:13:38.068384 | orchestrator | Tuesday 31 March 2026 02:13:22 +0000 (0:00:01.659) 0:00:07.200 *********
2026-03-31 02:13:38.068392 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:13:38.068401 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:13:38.068408 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:13:38.068416 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:13:38.068424 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:13:38.068431 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:13:38.068442 | orchestrator |
2026-03-31 02:13:38.068455 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-31 02:13:38.068468 | orchestrator | Tuesday 31 March 2026 02:13:26 +0000 (0:00:03.919) 0:00:11.119 *********
2026-03-31 02:13:38.068481 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:13:38.068494 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:13:38.068507 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:13:38.068519 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:13:38.068531 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:13:38.068543 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:13:38.068555 | orchestrator |
2026-03-31 02:13:38.068568 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-31 02:13:38.068581 | orchestrator |
2026-03-31 02:13:38.068594 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-31 02:13:38.068608 | orchestrator | Tuesday 31 March 2026 02:13:27 +0000 (0:00:00.753) 0:00:11.873 *********
2026-03-31 02:13:38.068621 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:13:38.068634 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:13:38.068645 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:13:38.068653 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:13:38.068660 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:13:38.068668 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:13:38.068676 | orchestrator | changed: [testbed-manager]
2026-03-31 02:13:38.068692 | orchestrator |
2026-03-31 02:13:38.068700 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-31 02:13:38.068708 | orchestrator | Tuesday 31 March 2026 02:13:28 +0000 (0:00:01.653) 0:00:13.526 *********
2026-03-31 02:13:38.068716 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:13:38.068724 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:13:38.068732 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:13:38.068739 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:13:38.068747 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:13:38.068755 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:13:38.068779 | orchestrator | changed: [testbed-manager]
2026-03-31 02:13:38.068787 | orchestrator |
2026-03-31 02:13:38.068795 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-31 02:13:38.068803 | orchestrator | Tuesday 31 March 2026 02:13:30 +0000 (0:00:01.755) 0:00:15.281 *********
2026-03-31 02:13:38.068811 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:13:38.068824 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:13:38.068837 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:13:38.068849 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:13:38.068864 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:13:38.068878 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:13:38.068890 | orchestrator | ok: [testbed-manager]
2026-03-31 02:13:38.068903 | orchestrator |
2026-03-31 02:13:38.068917 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-31 02:13:38.068930 | orchestrator | Tuesday 31 March 2026 02:13:32 +0000 (0:00:01.740) 0:00:17.021 *********
2026-03-31 02:13:38.068945 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:13:38.068954 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:13:38.068962 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:13:38.068970 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:13:38.068977 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:13:38.068985 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:13:38.068993 | orchestrator | changed: [testbed-manager]
2026-03-31 02:13:38.069001 | orchestrator |
2026-03-31 02:13:38.069009 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-31 02:13:38.069017 | orchestrator | Tuesday 31 March 2026 02:13:34 +0000 (0:00:01.985) 0:00:19.007 *********
2026-03-31 02:13:38.069025 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:13:38.069033 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:13:38.069041 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:13:38.069049 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:13:38.069056 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:13:38.069064 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:13:38.069072 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:13:38.069080 | orchestrator |
2026-03-31 02:13:38.069088 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-31 02:13:38.069120 | orchestrator |
2026-03-31 02:13:38.069129 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-31 02:13:38.069136 | orchestrator | Tuesday 31 March 2026 02:13:35 +0000 (0:00:00.680) 0:00:19.688 *********
2026-03-31 02:13:38.069144 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:13:38.069152 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:13:38.069160 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:13:38.069167 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:13:38.069175 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:13:38.069183 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:13:38.069190 | orchestrator | ok: [testbed-manager]
2026-03-31 02:13:38.069198 | orchestrator |
2026-03-31 02:13:38.069206 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:13:38.069215 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-31 02:13:38.069225 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069239 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069254 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069262 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069270 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069278 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:13:38.069286 | orchestrator |
2026-03-31 02:13:38.069294 | orchestrator |
2026-03-31 02:13:38.069302 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:13:38.069309 | orchestrator | Tuesday 31 March 2026 02:13:38 +0000 (0:00:02.902) 0:00:22.590 *********
2026-03-31 02:13:38.069317 | orchestrator | ===============================================================================
2026-03-31 02:13:38.069325 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s
2026-03-31 02:13:38.069333 | orchestrator | Install python3-docker -------------------------------------------------- 2.90s
2026-03-31 02:13:38.069341 | orchestrator | Apply netplan configuration --------------------------------------------- 2.70s
2026-03-31 02:13:38.069348 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.99s
2026-03-31 02:13:38.069356 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-03-31 02:13:38.069364 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.76s
2026-03-31 02:13:38.069372 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.74s
2026-03-31 02:13:38.069379 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.66s
2026-03-31 02:13:38.069387 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2026-03-31 02:13:38.069395 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-03-31 02:13:38.069403 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2026-03-31 02:13:38.069417 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s
2026-03-31 02:13:38.811507 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-31 02:13:50.915602 | orchestrator | 2026-03-31 02:13:50 | INFO  | Task fe7d59a0-be69-4d5a-a54a-35f8ea889d15 (reboot) was prepared for execution.
2026-03-31 02:13:50.915718 | orchestrator | 2026-03-31 02:13:50 | INFO  | It takes a moment until task fe7d59a0-be69-4d5a-a54a-35f8ea889d15 (reboot) has been started and output is visible here.
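The workarounds play above copies a custom CA certificate to every non-manager node and runs update-ca-certificates, while the RHEL-only update-ca-trust task is skipped. A minimal sketch of that pattern (task structure assumed for illustration, not taken from the testbed repository):

```yaml
# Illustrative sketch: distribute a CA certificate and refresh the trust store.
- name: Copy custom CA certificates
  ansible.builtin.copy:
    src: /opt/configuration/environments/kolla/certificates/ca/testbed.crt
    dest: /usr/local/share/ca-certificates/testbed.crt
    mode: "0644"
  when: ansible_os_family == "Debian"

- name: Run update-ca-certificates
  ansible.builtin.command: update-ca-certificates
  when: ansible_os_family == "Debian"

- name: Run update-ca-trust
  ansible.builtin.command: update-ca-trust
  when: ansible_os_family == "RedHat"
```

On the Ubuntu 24.04 nodes in this job only the Debian branch applies, which is why the log shows skipping for the update-ca-trust task on every host.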
2026-03-31 02:14:01.666274 | orchestrator | 2026-03-31 02:14:01.666370 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666379 | orchestrator | 2026-03-31 02:14:01.666385 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666391 | orchestrator | Tuesday 31 March 2026 02:13:55 +0000 (0:00:00.213) 0:00:00.213 ********* 2026-03-31 02:14:01.666405 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:14:01.666437 | orchestrator | 2026-03-31 02:14:01.666443 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666448 | orchestrator | Tuesday 31 March 2026 02:13:55 +0000 (0:00:00.108) 0:00:00.321 ********* 2026-03-31 02:14:01.666454 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:14:01.666460 | orchestrator | 2026-03-31 02:14:01.666466 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-31 02:14:01.666488 | orchestrator | Tuesday 31 March 2026 02:13:56 +0000 (0:00:00.920) 0:00:01.241 ********* 2026-03-31 02:14:01.666493 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:14:01.666498 | orchestrator | 2026-03-31 02:14:01.666503 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666508 | orchestrator | 2026-03-31 02:14:01.666513 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666518 | orchestrator | Tuesday 31 March 2026 02:13:56 +0000 (0:00:00.127) 0:00:01.369 ********* 2026-03-31 02:14:01.666523 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:14:01.666528 | orchestrator | 2026-03-31 02:14:01.666533 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666537 | orchestrator | Tuesday 31 March 2026 
02:13:56 +0000 (0:00:00.114) 0:00:01.484 ********* 2026-03-31 02:14:01.666542 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:14:01.666547 | orchestrator | 2026-03-31 02:14:01.666552 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-31 02:14:01.666556 | orchestrator | Tuesday 31 March 2026 02:13:57 +0000 (0:00:00.671) 0:00:02.155 ********* 2026-03-31 02:14:01.666561 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:14:01.666566 | orchestrator | 2026-03-31 02:14:01.666571 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666576 | orchestrator | 2026-03-31 02:14:01.666581 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666585 | orchestrator | Tuesday 31 March 2026 02:13:57 +0000 (0:00:00.131) 0:00:02.287 ********* 2026-03-31 02:14:01.666590 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:14:01.666595 | orchestrator | 2026-03-31 02:14:01.666600 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666604 | orchestrator | Tuesday 31 March 2026 02:13:57 +0000 (0:00:00.214) 0:00:02.501 ********* 2026-03-31 02:14:01.666609 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:14:01.666614 | orchestrator | 2026-03-31 02:14:01.666619 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-31 02:14:01.666634 | orchestrator | Tuesday 31 March 2026 02:13:58 +0000 (0:00:00.706) 0:00:03.208 ********* 2026-03-31 02:14:01.666639 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:14:01.666644 | orchestrator | 2026-03-31 02:14:01.666649 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666653 | orchestrator | 2026-03-31 02:14:01.666658 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666663 | orchestrator | Tuesday 31 March 2026 02:13:58 +0000 (0:00:00.131) 0:00:03.340 ********* 2026-03-31 02:14:01.666668 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:14:01.666673 | orchestrator | 2026-03-31 02:14:01.666677 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666682 | orchestrator | Tuesday 31 March 2026 02:13:58 +0000 (0:00:00.122) 0:00:03.462 ********* 2026-03-31 02:14:01.666687 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:14:01.666692 | orchestrator | 2026-03-31 02:14:01.666697 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-31 02:14:01.666701 | orchestrator | Tuesday 31 March 2026 02:13:59 +0000 (0:00:00.707) 0:00:04.170 ********* 2026-03-31 02:14:01.666706 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:14:01.666711 | orchestrator | 2026-03-31 02:14:01.666716 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666721 | orchestrator | 2026-03-31 02:14:01.666725 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666730 | orchestrator | Tuesday 31 March 2026 02:13:59 +0000 (0:00:00.141) 0:00:04.312 ********* 2026-03-31 02:14:01.666737 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:14:01.666745 | orchestrator | 2026-03-31 02:14:01.666752 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666760 | orchestrator | Tuesday 31 March 2026 02:13:59 +0000 (0:00:00.116) 0:00:04.428 ********* 2026-03-31 02:14:01.666820 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:14:01.666827 | orchestrator | 2026-03-31 02:14:01.666832 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-31 02:14:01.666837 | orchestrator | Tuesday 31 March 2026 02:14:00 +0000 (0:00:00.703) 0:00:05.132 ********* 2026-03-31 02:14:01.666842 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:14:01.666847 | orchestrator | 2026-03-31 02:14:01.666852 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-31 02:14:01.666857 | orchestrator | 2026-03-31 02:14:01.666862 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-31 02:14:01.666867 | orchestrator | Tuesday 31 March 2026 02:14:00 +0000 (0:00:00.123) 0:00:05.256 ********* 2026-03-31 02:14:01.666872 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:14:01.666876 | orchestrator | 2026-03-31 02:14:01.666881 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-31 02:14:01.666886 | orchestrator | Tuesday 31 March 2026 02:14:00 +0000 (0:00:00.123) 0:00:05.379 ********* 2026-03-31 02:14:01.666891 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:14:01.666896 | orchestrator | 2026-03-31 02:14:01.666901 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-31 02:14:01.666906 | orchestrator | Tuesday 31 March 2026 02:14:01 +0000 (0:00:00.780) 0:00:06.160 ********* 2026-03-31 02:14:01.666922 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:14:01.666927 | orchestrator | 2026-03-31 02:14:01.666932 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:14:01.666938 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:14:01.666944 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:14:01.666949 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-31 02:14:01.666954 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:14:01.666959 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:14:01.666964 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:14:01.666969 | orchestrator | 2026-03-31 02:14:01.666974 | orchestrator | 2026-03-31 02:14:01.666978 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:14:01.666983 | orchestrator | Tuesday 31 March 2026 02:14:01 +0000 (0:00:00.046) 0:00:06.206 ********* 2026-03-31 02:14:01.666988 | orchestrator | =============================================================================== 2026-03-31 02:14:01.666993 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.49s 2026-03-31 02:14:01.666998 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2026-03-31 02:14:01.667003 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.70s 2026-03-31 02:14:01.994499 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-31 02:14:14.099137 | orchestrator | 2026-03-31 02:14:14 | INFO  | Task 62231c81-a057-477a-8ff1-c9b013074987 (wait-for-connection) was prepared for execution. 2026-03-31 02:14:14.099310 | orchestrator | 2026-03-31 02:14:14 | INFO  | It takes a moment until task 62231c81-a057-477a-8ff1-c9b013074987 (wait-for-connection) has been started and output is visible here. 
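The sequence above follows a common pattern: trigger the reboot without blocking ("do not wait for the reboot to complete"), then run a separate `wait-for-connection` play to block until the nodes are reachable again. A minimal generic sketch of that second half, as a shell polling helper (the function name, the 5-second interval, and the SSH example are illustrative assumptions, not taken from the log):

```shell
# wait_for: retry an arbitrary command until it succeeds or attempts run out.
# Mirrors the "reboot, then wait until reachable" flow of the plays above.
wait_for() {
    local max_attempts=$1; shift
    local attempt=1
    until "$@"; do
        # Post-increment: compare current attempt, then bump the counter.
        if (( attempt++ == max_attempts )); then
            echo "wait_for: gave up after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}

# Hypothetical usage against one of the rebooted nodes:
# wait_for 60 ssh -o BatchMode=yes -o ConnectTimeout=5 testbed-node-0 true
```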
2026-03-31 02:14:30.511091 | orchestrator | 2026-03-31 02:14:30.511226 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-31 02:14:30.511331 | orchestrator | 2026-03-31 02:14:30.511345 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-31 02:14:30.511356 | orchestrator | Tuesday 31 March 2026 02:14:18 +0000 (0:00:00.249) 0:00:00.249 ********* 2026-03-31 02:14:30.511366 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:14:30.511377 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:14:30.511387 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:14:30.511397 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:14:30.511406 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:14:30.511416 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:14:30.511425 | orchestrator | 2026-03-31 02:14:30.511435 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:14:30.511446 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511458 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511468 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511478 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511487 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511497 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:14:30.511507 | orchestrator | 2026-03-31 02:14:30.511517 | orchestrator | 2026-03-31 02:14:30.511527 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-31 02:14:30.511537 | orchestrator | Tuesday 31 March 2026 02:14:30 +0000 (0:00:11.630) 0:00:11.879 ********* 2026-03-31 02:14:30.511546 | orchestrator | =============================================================================== 2026-03-31 02:14:30.511556 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2026-03-31 02:14:30.883166 | orchestrator | + osism apply hddtemp 2026-03-31 02:14:43.008153 | orchestrator | 2026-03-31 02:14:43 | INFO  | Task 40d03d9c-0821-4f6d-861f-667476d12752 (hddtemp) was prepared for execution. 2026-03-31 02:14:43.008335 | orchestrator | 2026-03-31 02:14:43 | INFO  | It takes a moment until task 40d03d9c-0821-4f6d-861f-667476d12752 (hddtemp) has been started and output is visible here. 2026-03-31 02:15:11.907770 | orchestrator | 2026-03-31 02:15:11.907914 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-31 02:15:11.907943 | orchestrator | 2026-03-31 02:15:11.907961 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-31 02:15:11.907979 | orchestrator | Tuesday 31 March 2026 02:14:47 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 02:15:11.907998 | orchestrator | ok: [testbed-manager] 2026-03-31 02:15:11.908018 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:15:11.908036 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:15:11.908053 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:15:11.908072 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:15:11.908089 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:15:11.908108 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:15:11.908127 | orchestrator | 2026-03-31 02:15:11.908146 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-31 02:15:11.908165 | orchestrator | Tuesday 31 March 2026 
02:14:48 +0000 (0:00:00.746) 0:00:01.020 ********* 2026-03-31 02:15:11.908185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:15:11.908240 | orchestrator | 2026-03-31 02:15:11.908261 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-31 02:15:11.908279 | orchestrator | Tuesday 31 March 2026 02:14:49 +0000 (0:00:01.240) 0:00:02.261 ********* 2026-03-31 02:15:11.908298 | orchestrator | ok: [testbed-manager] 2026-03-31 02:15:11.908317 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:15:11.908337 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:15:11.908382 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:15:11.908403 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:15:11.908423 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:15:11.908442 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:15:11.908462 | orchestrator | 2026-03-31 02:15:11.908482 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-31 02:15:11.908503 | orchestrator | Tuesday 31 March 2026 02:14:51 +0000 (0:00:01.968) 0:00:04.230 ********* 2026-03-31 02:15:11.908524 | orchestrator | changed: [testbed-manager] 2026-03-31 02:15:11.908544 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:15:11.908564 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:15:11.908583 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:15:11.908602 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:15:11.908621 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:15:11.908633 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:15:11.908644 | orchestrator | 2026-03-31 02:15:11.908655 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-31 02:15:11.908666 | orchestrator | Tuesday 31 March 2026 02:14:52 +0000 (0:00:01.293) 0:00:05.523 ********* 2026-03-31 02:15:11.908677 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:15:11.908688 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:15:11.908699 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:15:11.908709 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:15:11.908720 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:15:11.908731 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:15:11.908759 | orchestrator | ok: [testbed-manager] 2026-03-31 02:15:11.908770 | orchestrator | 2026-03-31 02:15:11.908781 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-31 02:15:11.908792 | orchestrator | Tuesday 31 March 2026 02:14:54 +0000 (0:00:01.363) 0:00:06.887 ********* 2026-03-31 02:15:11.908826 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:15:11.908846 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:15:11.908864 | orchestrator | changed: [testbed-manager] 2026-03-31 02:15:11.908899 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:15:11.908920 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:15:11.908938 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:15:11.908958 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:15:11.908970 | orchestrator | 2026-03-31 02:15:11.908981 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-31 02:15:11.908992 | orchestrator | Tuesday 31 March 2026 02:14:55 +0000 (0:00:00.903) 0:00:07.791 ********* 2026-03-31 02:15:11.909003 | orchestrator | changed: [testbed-manager] 2026-03-31 02:15:11.909014 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:15:11.909025 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:15:11.909035 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:15:11.909048 | orchestrator | changed: 
[testbed-node-3] 2026-03-31 02:15:11.909067 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:15:11.909078 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:15:11.909089 | orchestrator | 2026-03-31 02:15:11.909100 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-31 02:15:11.909111 | orchestrator | Tuesday 31 March 2026 02:15:08 +0000 (0:00:13.342) 0:00:21.133 ********* 2026-03-31 02:15:11.909122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:15:11.909134 | orchestrator | 2026-03-31 02:15:11.909203 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-31 02:15:11.909216 | orchestrator | Tuesday 31 March 2026 02:15:09 +0000 (0:00:01.123) 0:00:22.257 ********* 2026-03-31 02:15:11.909227 | orchestrator | changed: [testbed-manager] 2026-03-31 02:15:11.909238 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:15:11.909248 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:15:11.909260 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:15:11.909271 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:15:11.909282 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:15:11.909293 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:15:11.909303 | orchestrator | 2026-03-31 02:15:11.909314 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:15:11.909326 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:15:11.909432 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909449 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909460 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909471 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909482 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909496 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:15:11.909514 | orchestrator | 2026-03-31 02:15:11.909544 | orchestrator | 2026-03-31 02:15:11.909563 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:15:11.909580 | orchestrator | Tuesday 31 March 2026 02:15:11 +0000 (0:00:01.916) 0:00:24.173 ********* 2026-03-31 02:15:11.909598 | orchestrator | =============================================================================== 2026-03-31 02:15:11.909615 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.34s 2026-03-31 02:15:11.909634 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s 2026-03-31 02:15:11.909652 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2026-03-31 02:15:11.909670 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.36s 2026-03-31 02:15:11.909688 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.29s 2026-03-31 02:15:11.909706 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.24s 2026-03-31 02:15:11.909725 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.12s 2026-03-31 02:15:11.909745 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.90s 2026-03-31 02:15:11.909763 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2026-03-31 02:15:12.348080 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-31 02:15:12.407772 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-31 02:15:12.407868 | orchestrator | + sudo systemctl restart manager.service 2026-03-31 02:15:26.444973 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-31 02:15:26.445079 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-31 02:15:26.445092 | orchestrator | + local max_attempts=60 2026-03-31 02:15:26.445122 | orchestrator | + local name=ceph-ansible 2026-03-31 02:15:26.445137 | orchestrator | + local attempt_num=1 2026-03-31 02:15:26.445151 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:26.478664 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:26.478734 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:26.478742 | orchestrator | + sleep 5 2026-03-31 02:15:31.484528 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:31.509102 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:31.509166 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:31.509172 | orchestrator | + sleep 5 2026-03-31 02:15:36.512664 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:36.540716 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:36.540843 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:36.540860 | orchestrator | + sleep 5 2026-03-31 02:15:41.545497 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:41.586913 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:41.587001 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-31 02:15:41.587012 | orchestrator | + sleep 5 2026-03-31 02:15:46.592231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:46.630504 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:46.630640 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:46.630667 | orchestrator | + sleep 5 2026-03-31 02:15:51.635345 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:51.672162 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:51.672287 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:51.672315 | orchestrator | + sleep 5 2026-03-31 02:15:56.677351 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:15:56.717108 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:15:56.717232 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:15:56.717259 | orchestrator | + sleep 5 2026-03-31 02:16:01.725473 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:01.766832 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:01.766916 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:16:01.766927 | orchestrator | + sleep 5 2026-03-31 02:16:06.773178 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:06.820130 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:06.820238 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:16:06.820256 | orchestrator | + sleep 5 2026-03-31 02:16:11.824328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:11.880033 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:11.880137 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-31 02:16:11.880153 | orchestrator | + sleep 5 2026-03-31 02:16:16.885423 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:16.921722 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:16.921826 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:16:16.921841 | orchestrator | + sleep 5 2026-03-31 02:16:21.927723 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:21.974348 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:21.974468 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:16:21.974484 | orchestrator | + sleep 5 2026-03-31 02:16:26.978951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:27.025211 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:27.025385 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-31 02:16:27.025416 | orchestrator | + sleep 5 2026-03-31 02:16:32.032498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-31 02:16:32.060399 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:32.060506 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-31 02:16:32.060523 | orchestrator | + local max_attempts=60 2026-03-31 02:16:32.060536 | orchestrator | + local name=kolla-ansible 2026-03-31 02:16:32.060578 | orchestrator | + local attempt_num=1 2026-03-31 02:16:32.061331 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-31 02:16:32.104726 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:32.104812 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-31 02:16:32.104825 | orchestrator | + local max_attempts=60 2026-03-31 02:16:32.104864 | orchestrator | + local name=osism-ansible 2026-03-31 02:16:32.104875 | 
orchestrator | + local attempt_num=1 2026-03-31 02:16:32.105311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-31 02:16:32.135585 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-31 02:16:32.135694 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-31 02:16:32.135718 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-31 02:16:32.293433 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-31 02:16:32.465066 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-31 02:16:32.653856 | orchestrator | ARA in osism-ansible already disabled. 2026-03-31 02:16:32.832747 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-31 02:16:32.833177 | orchestrator | + osism apply gather-facts 2026-03-31 02:16:45.218316 | orchestrator | 2026-03-31 02:16:45 | INFO  | Task b1ab00bd-b302-4a88-948e-6b8ffc615cdd (gather-facts) was prepared for execution. 2026-03-31 02:16:45.218433 | orchestrator | 2026-03-31 02:16:45 | INFO  | It takes a moment until task b1ab00bd-b302-4a88-948e-6b8ffc615cdd (gather-facts) has been started and output is visible here. 
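The xtrace above shows `wait_for_container_healthy` polling `docker inspect` until the container's health status becomes `healthy`. Reconstructed from the traced statements as a self-contained function (the trace invokes `/usr/bin/docker` by absolute path; this sketch resolves `docker` from `$PATH`):

```shell
# Poll a container's health status every 5 seconds until it reports
# "healthy", giving up after max_attempts checks (as traced in the log).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage, as in the log: wait_for_container_healthy 60 ceph-ansible
```

In the run above the `ceph-ansible` container cycles through `unhealthy` and `starting` for roughly a minute after `manager.service` is restarted before reaching `healthy`, which is exactly the window this helper bridges.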
2026-03-31 02:17:00.179018 | orchestrator | 2026-03-31 02:17:00.179102 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-31 02:17:00.179111 | orchestrator | 2026-03-31 02:17:00.179117 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-31 02:17:00.179122 | orchestrator | Tuesday 31 March 2026 02:16:49 +0000 (0:00:00.251) 0:00:00.251 ********* 2026-03-31 02:17:00.179127 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:17:00.179134 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:17:00.179139 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:17:00.179143 | orchestrator | ok: [testbed-manager] 2026-03-31 02:17:00.179148 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:17:00.179153 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:17:00.179158 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:17:00.179162 | orchestrator | 2026-03-31 02:17:00.179167 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-31 02:17:00.179172 | orchestrator | 2026-03-31 02:17:00.179177 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-31 02:17:00.179181 | orchestrator | Tuesday 31 March 2026 02:16:59 +0000 (0:00:09.519) 0:00:09.771 ********* 2026-03-31 02:17:00.179186 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:17:00.179192 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:17:00.179197 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:17:00.179201 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:17:00.179206 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:17:00.179211 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:17:00.179216 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:17:00.179220 | orchestrator | 2026-03-31 02:17:00.179225 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-31 02:17:00.179230 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179236 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179240 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179245 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179250 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179254 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179259 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 02:17:00.179312 | orchestrator | 2026-03-31 02:17:00.179318 | orchestrator | 2026-03-31 02:17:00.179323 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:17:00.179328 | orchestrator | Tuesday 31 March 2026 02:16:59 +0000 (0:00:00.587) 0:00:10.359 ********* 2026-03-31 02:17:00.179332 | orchestrator | =============================================================================== 2026-03-31 02:17:00.179337 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.52s 2026-03-31 02:17:00.179341 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-03-31 02:17:00.568714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-31 02:17:00.587511 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-31 
02:17:00.599888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-31 02:17:00.610842 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-31 02:17:00.622637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-31 02:17:00.641118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-31 02:17:00.655576 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-31 02:17:00.677692 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-31 02:17:00.697649 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-31 02:17:00.713137 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-31 02:17:00.728088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-31 02:17:00.745974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-31 02:17:00.759938 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-31 02:17:00.780120 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-31 02:17:00.794903 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-31 02:17:00.810475 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-31 02:17:00.822428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-31 02:17:00.833900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-31 02:17:00.845186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-31 02:17:00.863841 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-31 02:17:00.884898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-31 02:17:00.901516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-31 02:17:00.920129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-31 02:17:00.940451 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-31 02:17:01.049963 | orchestrator | ok: Runtime: 0:25:04.871854 2026-03-31 02:17:01.152941 | 2026-03-31 02:17:01.153183 | TASK [Deploy services] 2026-03-31 02:17:01.848025 | orchestrator | 2026-03-31 02:17:01.848167 | orchestrator | # DEPLOY SERVICES 2026-03-31 02:17:01.848182 | orchestrator | 2026-03-31 02:17:01.848190 | orchestrator | + set -e 2026-03-31 02:17:01.848197 | orchestrator | + echo 2026-03-31 02:17:01.848204 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-31 02:17:01.848213 | orchestrator | + echo 2026-03-31 02:17:01.848238 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 02:17:01.848249 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 02:17:01.848257 | orchestrator | ++ INTERACTIVE=false 2026-03-31 
02:17:01.848264 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 02:17:01.848275 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 02:17:01.848281 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 02:17:01.848289 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 02:17:01.848295 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 02:17:01.848305 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 02:17:01.848310 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 02:17:01.848318 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 02:17:01.848324 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 02:17:01.848333 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 02:17:01.848338 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 02:17:01.848344 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 02:17:01.848350 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 02:17:01.848355 | orchestrator | ++ export ARA=false
2026-03-31 02:17:01.848361 | orchestrator | ++ ARA=false
2026-03-31 02:17:01.848366 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 02:17:01.848372 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 02:17:01.848377 | orchestrator | ++ export TEMPEST=false
2026-03-31 02:17:01.848383 | orchestrator | ++ TEMPEST=false
2026-03-31 02:17:01.848388 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 02:17:01.848393 | orchestrator | ++ IS_ZUUL=true
2026-03-31 02:17:01.848399 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:17:01.848405 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:17:01.848410 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 02:17:01.848415 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 02:17:01.848421 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 02:17:01.848426 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 02:17:01.848431 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 02:17:01.848437 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 02:17:01.848442 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 02:17:01.848452 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 02:17:01.848458 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-03-31 02:17:01.856074 | orchestrator | + set -e
2026-03-31 02:17:01.856159 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 02:17:01.856170 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 02:17:01.856177 | orchestrator | ++ INTERACTIVE=false
2026-03-31 02:17:01.856185 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 02:17:01.856196 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 02:17:01.856204 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 02:17:01.856210 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 02:17:01.856216 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 02:17:01.856221 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 02:17:01.856227 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 02:17:01.856234 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 02:17:01.856240 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 02:17:01.856246 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 02:17:01.856254 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 02:17:01.856264 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 02:17:01.856273 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 02:17:01.856283 | orchestrator | ++ export ARA=false
2026-03-31 02:17:01.856292 | orchestrator | ++ ARA=false
2026-03-31 02:17:01.856301 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 02:17:01.856310 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 02:17:01.856319 | orchestrator | ++ export TEMPEST=false
2026-03-31 02:17:01.856332 | orchestrator | ++ TEMPEST=false
2026-03-31 02:17:01.856342 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 02:17:01.856352 | orchestrator | ++ IS_ZUUL=true
2026-03-31 02:17:01.856361 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:17:01.856371 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:17:01.856381 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 02:17:01.856390 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 02:17:01.856400 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 02:17:01.856410 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 02:17:01.856416 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 02:17:01.856422 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 02:17:01.856450 | orchestrator |
2026-03-31 02:17:01.856456 | orchestrator | # PULL IMAGES
2026-03-31 02:17:01.856462 | orchestrator |
2026-03-31 02:17:01.856468 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 02:17:01.856474 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 02:17:01.856480 | orchestrator | + echo
2026-03-31 02:17:01.856486 | orchestrator | + echo '# PULL IMAGES'
2026-03-31 02:17:01.856492 | orchestrator | + echo
2026-03-31 02:17:01.857010 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-31 02:17:01.916127 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 02:17:01.916215 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-31 02:17:03.958301 | orchestrator | 2026-03-31 02:17:03 | INFO  | Trying to run play pull-images in environment custom
2026-03-31 02:17:14.229600 | orchestrator | 2026-03-31 02:17:14 | INFO  | Task 89d4bc0d-2d8f-4d8a-a0d3-fc85c31f2191 (pull-images) was prepared for execution.
2026-03-31 02:17:14.229913 | orchestrator | 2026-03-31 02:17:14 | INFO  | Task 89d4bc0d-2d8f-4d8a-a0d3-fc85c31f2191 is running in background. No more output. Check ARA for logs.
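The `set -x` trace above shows every variable the deploy scripts pick up from `/opt/manager-vars.sh`. A minimal reconstruction of that file, assembled purely from the `++ export` lines in the trace (the real file may differ in ordering and comments), looks like this:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of /opt/manager-vars.sh based on the trace above.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=false
export IS_ZUUL=true
export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```

Each script in `/opt/configuration/scripts/` sources this file (after `include.sh`), which is why the same variable dump repeats at the start of every step in the log.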
2026-03-31 02:17:14.580052 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-03-31 02:17:26.792598 | orchestrator | 2026-03-31 02:17:26 | INFO  | Task 78d1ab51-95d6-4fef-99fe-6e084900b91b (cgit) was prepared for execution.
2026-03-31 02:17:26.792791 | orchestrator | 2026-03-31 02:17:26 | INFO  | Task 78d1ab51-95d6-4fef-99fe-6e084900b91b is running in background. No more output. Check ARA for logs.
2026-03-31 02:17:39.781147 | orchestrator | 2026-03-31 02:17:39 | INFO  | Task 7c443dac-4f94-4141-9431-b2ba46c30c88 (dotfiles) was prepared for execution.
2026-03-31 02:17:39.781277 | orchestrator | 2026-03-31 02:17:39 | INFO  | Task 7c443dac-4f94-4141-9431-b2ba46c30c88 is running in background. No more output. Check ARA for logs.
2026-03-31 02:17:53.170290 | orchestrator | 2026-03-31 02:17:53 | INFO  | Task b0d3e77a-9d35-497d-8c73-cba6ad448f8d (homer) was prepared for execution.
2026-03-31 02:17:53.170393 | orchestrator | 2026-03-31 02:17:53 | INFO  | Task b0d3e77a-9d35-497d-8c73-cba6ad448f8d is running in background. No more output. Check ARA for logs.
2026-03-31 02:18:05.754674 | orchestrator | 2026-03-31 02:18:05 | INFO  | Task 3d9c81a0-45de-419a-a539-ec3ac40050a4 (phpmyadmin) was prepared for execution.
2026-03-31 02:18:05.754879 | orchestrator | 2026-03-31 02:18:05 | INFO  | Task 3d9c81a0-45de-419a-a539-ec3ac40050a4 is running in background. No more output. Check ARA for logs.
2026-03-31 02:18:18.652099 | orchestrator | 2026-03-31 02:18:18 | INFO  | Task 9b95bed7-bb11-46e1-b759-2af20e020ec7 (sosreport) was prepared for execution.
2026-03-31 02:18:18.652192 | orchestrator | 2026-03-31 02:18:18 | INFO  | Task 9b95bed7-bb11-46e1-b759-2af20e020ec7 is running in background. No more output. Check ARA for logs.
2026-03-31 02:18:19.014581 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-03-31 02:18:19.024678 | orchestrator | + set -e
2026-03-31 02:18:19.024742 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 02:18:19.024751 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 02:18:19.024757 | orchestrator | ++ INTERACTIVE=false
2026-03-31 02:18:19.024765 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 02:18:19.024771 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 02:18:19.024807 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 02:18:19.024814 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 02:18:19.024820 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 02:18:19.024825 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 02:18:19.024831 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 02:18:19.024837 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 02:18:19.024842 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 02:18:19.024848 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 02:18:19.024854 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 02:18:19.024860 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 02:18:19.024865 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 02:18:19.024871 | orchestrator | ++ export ARA=false
2026-03-31 02:18:19.024877 | orchestrator | ++ ARA=false
2026-03-31 02:18:19.024882 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 02:18:19.024910 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 02:18:19.024916 | orchestrator | ++ export TEMPEST=false
2026-03-31 02:18:19.024921 | orchestrator | ++ TEMPEST=false
2026-03-31 02:18:19.024927 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 02:18:19.024932 | orchestrator | ++ IS_ZUUL=true
2026-03-31 02:18:19.024949 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:18:19.024959 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 02:18:19.024964 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 02:18:19.024970 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 02:18:19.024975 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 02:18:19.024981 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 02:18:19.024986 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 02:18:19.024991 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 02:18:19.024997 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 02:18:19.025003 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 02:18:19.025974 | orchestrator | ++ semver 9.5.0 8.0.3
2026-03-31 02:18:19.084578 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 02:18:19.084660 | orchestrator | + osism apply frr
2026-03-31 02:18:31.345356 | orchestrator | 2026-03-31 02:18:31 | INFO  | Task 6855577f-7489-4c5c-a1d5-e588273b1581 (frr) was prepared for execution.
2026-03-31 02:18:31.345490 | orchestrator | 2026-03-31 02:18:31 | INFO  | It takes a moment until task 6855577f-7489-4c5c-a1d5-e588273b1581 (frr) has been started and output is visible here.
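The trace shows version-gated steps: `semver 9.5.0 8.0.3` prints a comparison result and the script proceeds only when `[[ $result -ge 0 ]]`, i.e. when the manager version is at least the required minimum. A hedged, pure-bash stand-in for such a `semver` helper (assuming plain `MAJOR.MINOR.PATCH` versions with no pre-release tags; the helper actually installed on the testbed may be implemented differently):

```shell
# semver A B -> prints 1, 0, or -1 as A is greater than, equal to,
# or less than B. Sketch only; assumes numeric x.y.z components.
semver() {
    local IFS=.
    local -a a=($1) b=($2)   # split each version on dots
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

# Usage mirroring the gate in the log: run the play only on manager >= 8.0.3.
if [[ $(semver 9.5.0 8.0.3) -ge 0 ]]; then
    echo "would run: osism apply frr"
fi
```

This matches the traced behaviour above, where `semver 9.5.0 7.0.0` yielded `1` and the subsequent test was `[[ 1 -ge 0 ]]`.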
2026-03-31 02:19:11.811010 | orchestrator |
2026-03-31 02:19:11.811163 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-31 02:19:11.811195 | orchestrator |
2026-03-31 02:19:11.811215 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-31 02:19:11.811243 | orchestrator | Tuesday 31 March 2026 02:18:39 +0000 (0:00:00.308) 0:00:00.308 *********
2026-03-31 02:19:11.811262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 02:19:11.811275 | orchestrator |
2026-03-31 02:19:11.811287 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-31 02:19:11.811298 | orchestrator | Tuesday 31 March 2026 02:18:41 +0000 (0:00:01.617) 0:00:01.926 *********
2026-03-31 02:19:11.811309 | orchestrator | changed: [testbed-manager]
2026-03-31 02:19:11.811321 | orchestrator |
2026-03-31 02:19:11.811332 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-31 02:19:11.811346 | orchestrator | Tuesday 31 March 2026 02:18:44 +0000 (0:00:03.594) 0:00:05.521 *********
2026-03-31 02:19:11.811357 | orchestrator | changed: [testbed-manager]
2026-03-31 02:19:11.811368 | orchestrator |
2026-03-31 02:19:11.811379 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-31 02:19:11.811390 | orchestrator | Tuesday 31 March 2026 02:19:00 +0000 (0:00:15.722) 0:00:21.243 *********
2026-03-31 02:19:11.811400 | orchestrator | ok: [testbed-manager]
2026-03-31 02:19:11.811412 | orchestrator |
2026-03-31 02:19:11.811423 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-31 02:19:11.811434 | orchestrator | Tuesday 31 March 2026 02:19:01 +0000 (0:00:01.054) 0:00:22.298 *********
2026-03-31 02:19:11.811445 | orchestrator | changed: [testbed-manager]
2026-03-31 02:19:11.811456 | orchestrator |
2026-03-31 02:19:11.811467 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-31 02:19:11.811478 | orchestrator | Tuesday 31 March 2026 02:19:02 +0000 (0:00:00.970) 0:00:23.268 *********
2026-03-31 02:19:11.811489 | orchestrator | ok: [testbed-manager]
2026-03-31 02:19:11.811499 | orchestrator |
2026-03-31 02:19:11.811510 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-31 02:19:11.811522 | orchestrator | Tuesday 31 March 2026 02:19:04 +0000 (0:00:01.441) 0:00:24.710 *********
2026-03-31 02:19:11.811533 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:19:11.811544 | orchestrator |
2026-03-31 02:19:11.811555 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-31 02:19:11.811566 | orchestrator | Tuesday 31 March 2026 02:19:04 +0000 (0:00:00.178) 0:00:24.888 *********
2026-03-31 02:19:11.811604 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:19:11.811617 | orchestrator |
2026-03-31 02:19:11.811628 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-31 02:19:11.811639 | orchestrator | Tuesday 31 March 2026 02:19:04 +0000 (0:00:00.227) 0:00:25.116 *********
2026-03-31 02:19:11.811650 | orchestrator | changed: [testbed-manager]
2026-03-31 02:19:11.811660 | orchestrator |
2026-03-31 02:19:11.811671 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-31 02:19:11.811682 | orchestrator | Tuesday 31 March 2026 02:19:05 +0000 (0:00:01.051) 0:00:26.167 *********
2026-03-31 02:19:11.811693 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-31 02:19:11.811703 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-31 02:19:11.811716 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-31 02:19:11.811726 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-31 02:19:11.811737 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-31 02:19:11.811748 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-31 02:19:11.811759 | orchestrator |
2026-03-31 02:19:11.811770 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-31 02:19:11.811780 | orchestrator | Tuesday 31 March 2026 02:19:08 +0000 (0:00:02.447) 0:00:28.614 *********
2026-03-31 02:19:11.811791 | orchestrator | ok: [testbed-manager]
2026-03-31 02:19:11.811802 | orchestrator |
2026-03-31 02:19:11.811813 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-31 02:19:11.811823 | orchestrator | Tuesday 31 March 2026 02:19:09 +0000 (0:00:01.852) 0:00:30.467 *********
2026-03-31 02:19:11.811834 | orchestrator | changed: [testbed-manager]
2026-03-31 02:19:11.811845 | orchestrator |
2026-03-31 02:19:11.811855 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:19:11.811866 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:19:11.811903 | orchestrator |
2026-03-31 02:19:11.811924 | orchestrator |
2026-03-31 02:19:11.811942 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:19:11.811953 | orchestrator | Tuesday 31 March 2026 02:19:11 +0000 (0:00:01.515) 0:00:31.982 *********
2026-03-31 02:19:11.811964 | orchestrator | ===============================================================================
2026-03-31 02:19:11.811974 | orchestrator | osism.services.frr : Install frr package ------------------------------- 15.72s
2026-03-31 02:19:11.811985 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 3.59s
2026-03-31 02:19:11.811996 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.45s
2026-03-31 02:19:11.812007 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.85s
2026-03-31 02:19:11.812018 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.62s
2026-03-31 02:19:11.812050 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.52s
2026-03-31 02:19:11.812061 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.44s
2026-03-31 02:19:11.812072 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.05s
2026-03-31 02:19:11.812083 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.05s
2026-03-31 02:19:11.812093 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.97s
2026-03-31 02:19:11.812104 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.23s
2026-03-31 02:19:11.812115 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.18s
2026-03-31 02:19:12.256509 | orchestrator | + osism apply kubernetes
2026-03-31 02:19:14.519608 | orchestrator | 2026-03-31 02:19:14 | INFO  | Task 566f6865-8006-402c-9c76-1e71fe458d19 (kubernetes) was prepared for execution.
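The `Set sysctl parameters` task above applies six kernel settings on the manager for FRR routing. The same parameters, expressed as a plain shell loop (a sketch only; the role uses Ansible's sysctl handling, and applying these for real requires root, so the commands are printed as a dry run here):

```shell
# Sysctl parameters applied by the osism.services.frr role, per the log above.
params="
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.fib_multipath_hash_policy=1
net.ipv4.conf.default.ignore_routes_with_linkdown=1
net.ipv4.conf.all.rp_filter=2
"
for kv in $params; do
    # Dry run: on a real host, drop the leading `echo`.
    echo sysctl -w "$kv"
done
```

These enable IPv4 forwarding and multipath hashing while disabling ICMP redirects, which is the usual baseline for a BGP-routed (FRR) node.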
2026-03-31 02:19:14.519711 | orchestrator | 2026-03-31 02:19:14 | INFO  | It takes a moment until task 566f6865-8006-402c-9c76-1e71fe458d19 (kubernetes) has been started and output is visible here.
2026-03-31 02:19:41.560526 | orchestrator |
2026-03-31 02:19:41.560657 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-31 02:19:41.560674 | orchestrator |
2026-03-31 02:19:41.560683 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-31 02:19:41.560692 | orchestrator | Tuesday 31 March 2026 02:19:20 +0000 (0:00:00.276) 0:00:00.276 *********
2026-03-31 02:19:41.560699 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:19:41.560708 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:19:41.560715 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:19:41.560723 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:19:41.560730 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:19:41.560738 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:19:41.560745 | orchestrator |
2026-03-31 02:19:41.560752 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-31 02:19:41.560760 | orchestrator | Tuesday 31 March 2026 02:19:21 +0000 (0:00:00.871) 0:00:01.148 *********
2026-03-31 02:19:41.560767 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.560775 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.560782 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.560789 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.560796 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.560803 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.560810 | orchestrator |
2026-03-31 02:19:41.560818 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-31 02:19:41.560827 | orchestrator | Tuesday 31 March 2026 02:19:21 +0000 (0:00:00.656) 0:00:01.804 *********
2026-03-31 02:19:41.560835 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.560842 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.560849 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.560856 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.560865 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.560877 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.560887 | orchestrator |
2026-03-31 02:19:41.560899 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-31 02:19:41.560910 | orchestrator | Tuesday 31 March 2026 02:19:22 +0000 (0:00:00.870) 0:00:02.675 *********
2026-03-31 02:19:41.560921 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:19:41.560985 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:19:41.561000 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:19:41.561016 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:19:41.561028 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:19:41.561041 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:19:41.561054 | orchestrator |
2026-03-31 02:19:41.561067 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-31 02:19:41.561080 | orchestrator | Tuesday 31 March 2026 02:19:24 +0000 (0:00:01.997) 0:00:04.672 *********
2026-03-31 02:19:41.561088 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:19:41.561097 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:19:41.561105 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:19:41.561113 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:19:41.561121 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:19:41.561130 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:19:41.561138 | orchestrator |
2026-03-31 02:19:41.561146 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-31 02:19:41.561154 | orchestrator | Tuesday 31 March 2026 02:19:26 +0000 (0:00:01.695) 0:00:06.368 *********
2026-03-31 02:19:41.561162 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:19:41.561193 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:19:41.561201 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:19:41.561209 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:19:41.561218 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:19:41.561226 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:19:41.561234 | orchestrator |
2026-03-31 02:19:41.561251 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-31 02:19:41.561260 | orchestrator | Tuesday 31 March 2026 02:19:27 +0000 (0:00:01.179) 0:00:07.547 *********
2026-03-31 02:19:41.561268 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561276 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.561284 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.561292 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.561300 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.561309 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.561317 | orchestrator |
2026-03-31 02:19:41.561326 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-31 02:19:41.561334 | orchestrator | Tuesday 31 March 2026 02:19:28 +0000 (0:00:00.668) 0:00:08.216 *********
2026-03-31 02:19:41.561342 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561350 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.561357 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.561365 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.561373 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.561381 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.561389 | orchestrator |
2026-03-31 02:19:41.561397 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-31 02:19:41.561405 | orchestrator | Tuesday 31 March 2026 02:19:29 +0000 (0:00:01.058) 0:00:09.274 *********
2026-03-31 02:19:41.561413 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561422 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561429 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561436 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561443 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561450 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.561457 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561464 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561477 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561489 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561530 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.561544 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561555 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561566 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.561576 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.561587 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 02:19:41.561598 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 02:19:41.561610 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.561621 | orchestrator |
2026-03-31 02:19:41.561632 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-31 02:19:41.561645 | orchestrator | Tuesday 31 March 2026 02:19:30 +0000 (0:00:00.813) 0:00:10.088 *********
2026-03-31 02:19:41.561656 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561667 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.561679 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.561702 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.561715 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.561727 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.561739 | orchestrator |
2026-03-31 02:19:41.561751 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-31 02:19:41.561759 | orchestrator | Tuesday 31 March 2026 02:19:31 +0000 (0:00:01.277) 0:00:11.365 *********
2026-03-31 02:19:41.561767 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:19:41.561774 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:19:41.561781 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:19:41.561788 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:19:41.561796 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:19:41.561803 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:19:41.561810 | orchestrator |
2026-03-31 02:19:41.561817 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-31 02:19:41.561824 | orchestrator | Tuesday 31 March 2026 02:19:32 +0000 (0:00:00.828) 0:00:12.194 *********
2026-03-31 02:19:41.561831 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:19:41.561838 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:19:41.561846 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:19:41.561853 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:19:41.561860 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:19:41.561867 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:19:41.561873 | orchestrator |
2026-03-31 02:19:41.561881 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-31 02:19:41.561888 | orchestrator | Tuesday 31 March 2026 02:19:37 +0000 (0:00:05.441) 0:00:17.635 *********
2026-03-31 02:19:41.561895 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561909 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.561916 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.561924 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.561931 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.561962 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.561970 | orchestrator |
2026-03-31 02:19:41.561977 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-31 02:19:41.561984 | orchestrator | Tuesday 31 March 2026 02:19:38 +0000 (0:00:00.930) 0:00:18.566 *********
2026-03-31 02:19:41.561991 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.561998 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.562005 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.562080 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.562091 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.562099 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.562106 | orchestrator |
2026-03-31 02:19:41.562113 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-31 02:19:41.562122 | orchestrator | Tuesday 31 March 2026 02:19:39 +0000 (0:00:01.331) 0:00:19.898 *********
2026-03-31 02:19:41.562129 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.562136 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.562143 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.562150 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.562157 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.562164 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.562171 | orchestrator |
2026-03-31 02:19:41.562178 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-31 02:19:41.562185 | orchestrator | Tuesday 31 March 2026 02:19:40 +0000 (0:00:00.689) 0:00:20.588 *********
2026-03-31 02:19:41.562192 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-31 02:19:41.562204 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-31 02:19:41.562212 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:19:41.562219 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-31 02:19:41.562237 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-31 02:19:41.562250 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:19:41.562262 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-31 02:19:41.562274 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-31 02:19:41.562286 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:19:41.562298 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-31 02:19:41.562309 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-31 02:19:41.562322 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:19:41.562335 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-31 02:19:41.562346 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-31 02:19:41.562359 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:19:41.562372 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-31 02:19:41.562384 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-31 02:19:41.562399 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:19:41.562412 | orchestrator |
2026-03-31 02:19:41.562425 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-31 02:19:41.562454 | orchestrator | Tuesday 31 March 2026 02:19:41 +0000 (0:00:01.013) 0:00:21.602 *********
2026-03-31 02:20:58.765688 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:20:58.765822 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:20:58.765836 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:20:58.765845 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:20:58.765854 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:20:58.765862 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:20:58.765871 | orchestrator |
2026-03-31 02:20:58.765880 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-31 02:20:58.765889 | orchestrator | Tuesday 31 March 2026 02:19:42 +0000 (0:00:00.688) 0:00:22.290 *********
2026-03-31 02:20:58.765911 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:20:58.765919 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:20:58.765936 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:20:58.765944 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:20:58.765952 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:20:58.765960 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:20:58.765968 | orchestrator |
2026-03-31 02:20:58.765975 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-31 02:20:58.765983 | orchestrator |
2026-03-31 02:20:58.765991 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-31 02:20:58.766000 | orchestrator | Tuesday 31 March 2026 02:19:43 +0000 (0:00:01.437) 0:00:23.727 *********
2026-03-31 02:20:58.766008 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:20:58.766115 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:20:58.766126 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:20:58.766134 | orchestrator |
2026-03-31 02:20:58.766143 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-31 02:20:58.766151 | orchestrator | Tuesday 31 March 2026 02:19:45 +0000 (0:00:01.430) 0:00:25.158 *********
2026-03-31 02:20:58.766159 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:20:58.766167 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:20:58.766174 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:20:58.766183 | orchestrator |
2026-03-31 02:20:58.766192 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-31 02:20:58.766202 | orchestrator | Tuesday 31 March 2026 02:19:46 +0000 (0:00:01.879) 0:00:27.037 *********
2026-03-31 02:20:58.766212 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:20:58.766220 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:20:58.766229 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:20:58.766239 | orchestrator |
2026-03-31 02:20:58.766248 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-31 02:20:58.766278 | orchestrator | Tuesday 31 March 2026 02:19:47 +0000 (0:00:00.899) 0:00:27.927 *********
2026-03-31 02:20:58.766288 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:20:58.766296 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:20:58.766305 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:20:58.766314 | orchestrator |
2026-03-31 02:20:58.766323 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-31 02:20:58.766332 | orchestrator | Tuesday 31 March 2026 02:19:48 +0000 (0:00:00.406) 0:00:28.826 *********
2026-03-31 02:20:58.766341 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:20:58.766351 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:20:58.766360 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:20:58.766369 | orchestrator |
2026-03-31 02:20:58.766378 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-31 02:20:58.766402 | orchestrator | Tuesday 31 March 2026 02:19:49 +0000 (0:00:00.406) 0:00:29.233 *********
2026-03-31 02:20:58.766417 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:20:58.766430 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:20:58.766443 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:20:58.766456 | orchestrator |
2026-03-31 02:20:58.766470 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-31 02:20:58.766482 | orchestrator | Tuesday 31 March 2026 02:19:50 +0000 (0:00:01.019) 0:00:30.252 *********
2026-03-31 02:20:58.766495 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:20:58.766508 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:20:58.766521 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:20:58.766535 | orchestrator |
2026-03-31 02:20:58.766548 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-31 02:20:58.766561 | orchestrator | Tuesday 31 March 2026 02:19:52 +0000 (0:00:01.819) 0:00:32.071 *********
2026-03-31 02:20:58.766576 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-31 02:20:58.766589 | orchestrator |
2026-03-31 02:20:58.766602 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-31 02:20:58.766613
| orchestrator | Tuesday 31 March 2026 02:19:52 +0000 (0:00:00.609) 0:00:32.681 ********* 2026-03-31 02:20:58.766626 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:20:58.766638 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:20:58.766651 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:20:58.766664 | orchestrator | 2026-03-31 02:20:58.766678 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-31 02:20:58.766691 | orchestrator | Tuesday 31 March 2026 02:19:54 +0000 (0:00:02.022) 0:00:34.704 ********* 2026-03-31 02:20:58.766705 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:20:58.766719 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:20:58.766732 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:20:58.766746 | orchestrator | 2026-03-31 02:20:58.766759 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-31 02:20:58.766772 | orchestrator | Tuesday 31 March 2026 02:19:55 +0000 (0:00:00.577) 0:00:35.281 ********* 2026-03-31 02:20:58.766786 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:20:58.766799 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:20:58.766812 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:20:58.766826 | orchestrator | 2026-03-31 02:20:58.766840 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-31 02:20:58.766852 | orchestrator | Tuesday 31 March 2026 02:19:56 +0000 (0:00:00.803) 0:00:36.085 ********* 2026-03-31 02:20:58.766864 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:20:58.766872 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:20:58.766880 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:20:58.766888 | orchestrator | 2026-03-31 02:20:58.766897 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-31 02:20:58.766934 | orchestrator | Tuesday 
31 March 2026 02:19:57 +0000 (0:00:01.364) 0:00:37.450 ********* 2026-03-31 02:20:58.766948 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:20:58.766978 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:20:58.766992 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:20:58.767005 | orchestrator | 2026-03-31 02:20:58.767017 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-31 02:20:58.767026 | orchestrator | Tuesday 31 March 2026 02:19:58 +0000 (0:00:00.757) 0:00:38.207 ********* 2026-03-31 02:20:58.767033 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:20:58.767041 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:20:58.767049 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:20:58.767057 | orchestrator | 2026-03-31 02:20:58.767084 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-31 02:20:58.767094 | orchestrator | Tuesday 31 March 2026 02:19:58 +0000 (0:00:00.472) 0:00:38.679 ********* 2026-03-31 02:20:58.767102 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:20:58.767110 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:20:58.767118 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:20:58.767126 | orchestrator | 2026-03-31 02:20:58.767141 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-31 02:20:58.767149 | orchestrator | Tuesday 31 March 2026 02:19:59 +0000 (0:00:01.345) 0:00:40.024 ********* 2026-03-31 02:20:58.767156 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:20:58.767164 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:20:58.767172 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:20:58.767180 | orchestrator | 2026-03-31 02:20:58.767188 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-31 02:20:58.767195 | orchestrator | Tuesday 31 March 2026 02:20:03 
+0000 (0:00:03.137) 0:00:43.162 ********* 2026-03-31 02:20:58.767203 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:20:58.767211 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:20:58.767219 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:20:58.767230 | orchestrator | 2026-03-31 02:20:58.767238 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-31 02:20:58.767246 | orchestrator | Tuesday 31 March 2026 02:20:03 +0000 (0:00:00.337) 0:00:43.499 ********* 2026-03-31 02:20:58.767254 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-31 02:20:58.767264 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-31 02:20:58.767272 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-31 02:20:58.767280 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-31 02:20:58.767288 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-31 02:20:58.767296 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-31 02:20:58.767303 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-31 02:20:58.767311 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-31 02:20:58.767319 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-31 02:20:58.767327 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-31 02:20:58.767335 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-31 02:20:58.767351 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-31 02:20:58.767359 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-31 02:20:58.767367 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-31 02:20:58.767374 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-31 02:20:58.767382 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:20:58.767390 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:20:58.767398 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:20:58.767406 | orchestrator |
2026-03-31 02:20:58.767418 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-31 02:20:58.767426 | orchestrator | Tuesday 31 March 2026 02:20:57 +0000 (0:00:53.972) 0:01:37.471 *********
2026-03-31 02:20:58.767434 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:20:58.767442 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:20:58.767450 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:20:58.767457 | orchestrator |
2026-03-31 02:20:58.767465 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-31 02:20:58.767473 | orchestrator | Tuesday 31 March 2026 02:20:57 +0000 (0:00:00.316) 0:01:37.788 *********
2026-03-31 02:20:58.767486 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.502646 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.502805 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.502828 | orchestrator |
2026-03-31 02:21:39.502844 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-31 02:21:39.502860 | orchestrator | Tuesday 31 March 2026 02:20:58 +0000 (0:00:01.021) 0:01:38.809 *********
2026-03-31 02:21:39.502875 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.502889 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.502904 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.502918 | orchestrator |
2026-03-31 02:21:39.502933 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-31 02:21:39.502948 | orchestrator | Tuesday 31 March 2026 02:20:59 +0000 (0:00:01.231) 0:01:40.041 *********
2026-03-31 02:21:39.502961 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.502976 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.502989 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503003 | orchestrator |
2026-03-31 02:21:39.503017 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-31 02:21:39.503033 | orchestrator | Tuesday 31 March 2026 02:21:24 +0000 (0:00:24.091) 0:02:04.132 *********
2026-03-31 02:21:39.503048 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503063 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503078 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.503094 | orchestrator |
2026-03-31 02:21:39.503109 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-31 02:21:39.503124 | orchestrator | Tuesday 31 March 2026 02:21:24 +0000 (0:00:00.656) 0:02:04.789 *********
2026-03-31 02:21:39.503166 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503182 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503197 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.503213 | orchestrator |
2026-03-31 02:21:39.503228 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-31 02:21:39.503243 | orchestrator | Tuesday 31 March 2026 02:21:25 +0000 (0:00:00.628) 0:02:05.442 *********
2026-03-31 02:21:39.503258 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.503273 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.503288 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503303 | orchestrator |
2026-03-31 02:21:39.503318 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-31 02:21:39.503367 | orchestrator | Tuesday 31 March 2026 02:21:26 +0000 (0:00:00.628) 0:02:06.071 *********
2026-03-31 02:21:39.503385 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503401 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.503416 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503431 | orchestrator |
2026-03-31 02:21:39.503443 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-31 02:21:39.503452 | orchestrator | Tuesday 31 March 2026 02:21:26 +0000 (0:00:00.937) 0:02:07.009 *********
2026-03-31 02:21:39.503461 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503469 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503478 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.503486 | orchestrator |
2026-03-31 02:21:39.503495 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-31 02:21:39.503504 | orchestrator | Tuesday 31 March 2026 02:21:27 +0000 (0:00:00.318) 0:02:07.328 *********
2026-03-31 02:21:39.503512 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.503521 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.503530 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503538 | orchestrator |
2026-03-31 02:21:39.503547 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-31 02:21:39.503556 | orchestrator | Tuesday 31 March 2026 02:21:27 +0000 (0:00:00.682) 0:02:08.010 *********
2026-03-31 02:21:39.503569 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.503583 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.503598 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503612 | orchestrator |
2026-03-31 02:21:39.503626 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-31 02:21:39.503640 | orchestrator | Tuesday 31 March 2026 02:21:28 +0000 (0:00:00.625) 0:02:08.635 *********
2026-03-31 02:21:39.503655 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.503669 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.503685 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503699 | orchestrator |
2026-03-31 02:21:39.503715 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-31 02:21:39.503726 | orchestrator | Tuesday 31 March 2026 02:21:29 +0000 (0:00:00.910) 0:02:09.546 *********
2026-03-31 02:21:39.503738 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:21:39.503746 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:21:39.503755 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:21:39.503763 | orchestrator |
2026-03-31 02:21:39.503772 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-31 02:21:39.503781 | orchestrator | Tuesday 31 March 2026 02:21:30 +0000 (0:00:01.225) 0:02:10.771 *********
2026-03-31 02:21:39.503789 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:21:39.503798 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:21:39.503806 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:21:39.503815 | orchestrator |
2026-03-31 02:21:39.503823 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-31 02:21:39.503832 | orchestrator | Tuesday 31 March 2026 02:21:31 +0000 (0:00:00.306) 0:02:11.078 *********
2026-03-31 02:21:39.503840 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:21:39.503849 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:21:39.503857 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:21:39.503866 | orchestrator |
2026-03-31 02:21:39.503874 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-31 02:21:39.503883 | orchestrator | Tuesday 31 March 2026 02:21:31 +0000 (0:00:00.332) 0:02:11.411 *********
2026-03-31 02:21:39.503892 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.503900 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503909 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503917 | orchestrator |
2026-03-31 02:21:39.503926 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-31 02:21:39.503934 | orchestrator | Tuesday 31 March 2026 02:21:32 +0000 (0:00:00.676) 0:02:12.088 *********
2026-03-31 02:21:39.503953 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:21:39.503962 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:21:39.503992 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:21:39.504002 | orchestrator |
2026-03-31 02:21:39.504011 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-31 02:21:39.504022 | orchestrator | Tuesday 31 March 2026 02:21:33 +0000 (0:00:00.991) 0:02:13.079 *********
2026-03-31 02:21:39.504031 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 02:21:39.504040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 02:21:39.504048 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 02:21:39.504057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 02:21:39.504065 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 02:21:39.504074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 02:21:39.504082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 02:21:39.504091 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 02:21:39.504100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 02:21:39.504109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-31 02:21:39.504117 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 02:21:39.504126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 02:21:39.504161 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 02:21:39.504176 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-31 02:21:39.504185 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 02:21:39.504193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 02:21:39.504202 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 02:21:39.504211 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 02:21:39.504221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 02:21:39.504235 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 02:21:39.504248 | orchestrator |
2026-03-31 02:21:39.504271 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-31 02:21:39.504285 | orchestrator |
2026-03-31 02:21:39.504299 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-31 02:21:39.504312 | orchestrator | Tuesday 31 March 2026 02:21:36 +0000 (0:00:03.188) 0:02:16.268 *********
2026-03-31 02:21:39.504326 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:21:39.504339 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:21:39.504352 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:21:39.504364 | orchestrator |
2026-03-31 02:21:39.504396 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-31 02:21:39.504411 | orchestrator | Tuesday 31 March 2026 02:21:36 +0000 (0:00:00.328) 0:02:16.596 *********
2026-03-31 02:21:39.504425 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:21:39.504439 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:21:39.504455 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:21:39.504482 | orchestrator |
2026-03-31 02:21:39.504493 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-31 02:21:39.504501 | orchestrator | Tuesday 31 March 2026 02:21:37 +0000 (0:00:00.964) 0:02:17.561 *********
2026-03-31 02:21:39.504510 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:21:39.504518 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:21:39.504527 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:21:39.504535 | orchestrator |
2026-03-31 02:21:39.504544 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-31 02:21:39.504553 | orchestrator | Tuesday 31 March 2026 02:21:37 +0000 (0:00:00.495) 0:02:17.933 *********
2026-03-31 02:21:39.504561 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:21:39.504570 | orchestrator |
2026-03-31 02:21:39.504579 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-31 02:21:39.504588 | orchestrator | Tuesday 31 March 2026 02:21:38 +0000 (0:00:00.605) 0:02:18.429 *********
2026-03-31 02:21:39.504596 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:21:39.504605 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:21:39.504613 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:21:39.504622 | orchestrator |
2026-03-31 02:21:39.504631 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-31 02:21:39.504639 | orchestrator | Tuesday 31 March 2026 02:21:38 +0000 (0:00:00.605) 0:02:19.034 *********
2026-03-31 02:21:39.504648 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:21:39.504656 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:21:39.504671 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:21:39.504694 | orchestrator |
2026-03-31 02:21:39.504708 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-31 02:21:39.504722 | orchestrator | Tuesday 31 March 2026 02:21:39 +0000 (0:00:00.322) 0:02:19.357 *********
2026-03-31 02:21:39.504749 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:23:24.195989 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:23:24.196092 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:23:24.196106 | orchestrator |
2026-03-31 02:23:24.196118 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-31 02:23:24.196129 | orchestrator | Tuesday 31 March 2026 02:21:39 +0000 (0:00:00.317) 0:02:19.675 *********
2026-03-31 02:23:24.196139 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:23:24.196149 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:23:24.196159 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:23:24.196168 | orchestrator |
2026-03-31 02:23:24.196178 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-31 02:23:24.196184 | orchestrator | Tuesday 31 March 2026 02:21:40 +0000 (0:00:00.681) 0:02:20.356 *********
2026-03-31 02:23:24.196190 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:23:24.196196 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:23:24.196201 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:23:24.196207 | orchestrator |
2026-03-31 02:23:24.196212 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-31 02:23:24.196218 | orchestrator | Tuesday 31 March 2026 02:21:41 +0000 (0:00:01.662) 0:02:22.019 *********
2026-03-31 02:23:24.196224 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:23:24.196229 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:23:24.196234 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:23:24.196240 | orchestrator |
2026-03-31 02:23:24.196245 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-31 02:23:24.196251 | orchestrator | Tuesday 31 March 2026 02:21:43 +0000 (0:00:01.302) 0:02:23.321 *********
2026-03-31 02:23:24.196256 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:23:24.196262 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:23:24.196267 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:23:24.196273 | orchestrator |
2026-03-31 02:23:24.196337 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-31 02:23:24.196364 | orchestrator |
2026-03-31 02:23:24.196370 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-31 02:23:24.196375 | orchestrator | Tuesday 31 March 2026 02:21:53 +0000 (0:00:10.325) 0:02:33.647 *********
2026-03-31 02:23:24.196381 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196387 | orchestrator |
2026-03-31 02:23:24.196393 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-31 02:23:24.196398 | orchestrator | Tuesday 31 March 2026 02:21:54 +0000 (0:00:00.850) 0:02:34.497 *********
2026-03-31 02:23:24.196404 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196409 | orchestrator |
2026-03-31 02:23:24.196415 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-31 02:23:24.196420 | orchestrator | Tuesday 31 March 2026 02:21:55 +0000 (0:00:00.734) 0:02:35.231 *********
2026-03-31 02:23:24.196426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-31 02:23:24.196431 | orchestrator |
2026-03-31 02:23:24.196437 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-31 02:23:24.196442 | orchestrator | Tuesday 31 March 2026 02:21:55 +0000 (0:00:00.574) 0:02:35.806 *********
2026-03-31 02:23:24.196448 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196453 | orchestrator |
2026-03-31 02:23:24.196458 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-31 02:23:24.196464 | orchestrator | Tuesday 31 March 2026 02:21:56 +0000 (0:00:00.941) 0:02:36.747 *********
2026-03-31 02:23:24.196469 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196474 | orchestrator |
2026-03-31 02:23:24.196480 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-31 02:23:24.196485 | orchestrator | Tuesday 31 March 2026 02:21:57 +0000 (0:00:00.677) 0:02:37.425 *********
2026-03-31 02:23:24.196490 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-31 02:23:24.196496 | orchestrator |
2026-03-31 02:23:24.196502 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-31 02:23:24.196507 | orchestrator | Tuesday 31 March 2026 02:21:59 +0000 (0:00:01.666) 0:02:39.092 *********
2026-03-31 02:23:24.196512 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-31 02:23:24.196518 | orchestrator |
2026-03-31 02:23:24.196540 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-31 02:23:24.196547 | orchestrator | Tuesday 31 March 2026 02:21:59 +0000 (0:00:00.934) 0:02:40.026 *********
2026-03-31 02:23:24.196553 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196559 | orchestrator |
2026-03-31 02:23:24.196565 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-31 02:23:24.196571 | orchestrator | Tuesday 31 March 2026 02:22:00 +0000 (0:00:00.475) 0:02:40.501 *********
2026-03-31 02:23:24.196577 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196583 | orchestrator |
2026-03-31 02:23:24.196589 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-31 02:23:24.196596 | orchestrator |
2026-03-31 02:23:24.196602 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-31 02:23:24.196609 | orchestrator | Tuesday 31 March 2026 02:22:00 +0000 (0:00:00.466) 0:02:40.968 *********
2026-03-31 02:23:24.196615 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196621 | orchestrator |
2026-03-31 02:23:24.196627 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-31 02:23:24.196633 | orchestrator | Tuesday 31 March 2026 02:22:01 +0000 (0:00:00.436) 0:02:41.404 *********
2026-03-31 02:23:24.196639 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 02:23:24.196646 | orchestrator |
2026-03-31 02:23:24.196652 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-31 02:23:24.196658 | orchestrator | Tuesday 31 March 2026 02:22:01 +0000 (0:00:00.253) 0:02:41.657 *********
2026-03-31 02:23:24.196664 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196670 | orchestrator |
2026-03-31 02:23:24.196681 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-31 02:23:24.196688 | orchestrator | Tuesday 31 March 2026 02:22:02 +0000 (0:00:00.916) 0:02:42.573 *********
2026-03-31 02:23:24.196694 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196700 | orchestrator |
2026-03-31 02:23:24.196720 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-31 02:23:24.196727 | orchestrator | Tuesday 31 March 2026 02:22:04 +0000 (0:00:01.911) 0:02:44.485 *********
2026-03-31 02:23:24.196733 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196739 | orchestrator |
2026-03-31 02:23:24.196746 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-31 02:23:24.196752 | orchestrator | Tuesday 31 March 2026 02:22:06 +0000 (0:00:01.918) 0:02:46.403 *********
2026-03-31 02:23:24.196758 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196763 | orchestrator |
2026-03-31 02:23:24.196768 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-31 02:23:24.196774 | orchestrator | Tuesday 31 March 2026 02:22:06 +0000 (0:00:00.487) 0:02:46.891 *********
2026-03-31 02:23:24.196779 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196784 | orchestrator |
2026-03-31 02:23:24.196789 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-31 02:23:24.196795 | orchestrator | Tuesday 31 March 2026 02:22:15 +0000 (0:00:08.806) 0:02:55.697 *********
2026-03-31 02:23:24.196800 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:24.196805 | orchestrator |
2026-03-31 02:23:24.196811 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-31 02:23:24.196816 | orchestrator | Tuesday 31 March 2026 02:22:29 +0000 (0:00:13.361) 0:03:09.059 *********
2026-03-31 02:23:24.196821 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:24.196827 | orchestrator |
2026-03-31 02:23:24.196832 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-31 02:23:24.196837 | orchestrator |
2026-03-31 02:23:24.196842 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-31 02:23:24.196848 | orchestrator | Tuesday 31 March 2026 02:22:29 +0000 (0:00:00.875) 0:03:09.935 *********
2026-03-31 02:23:24.196853 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:23:24.196858 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:23:24.196864 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:23:24.196869 | orchestrator |
2026-03-31 02:23:24.196874 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-31 02:23:24.196880 | orchestrator | Tuesday 31 March 2026 02:22:30 +0000 (0:00:00.370) 0:03:10.305 *********
2026-03-31 02:23:24.196885 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:24.196890 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:23:24.196896 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:23:24.196901 | orchestrator |
2026-03-31 02:23:24.196906 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-31 02:23:24.196912 | orchestrator | Tuesday 31 March 2026 02:22:30 +0000 (0:00:00.326) 0:03:10.632 *********
2026-03-31 02:23:24.196917 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:23:24.196923 | orchestrator |
2026-03-31 02:23:24.196928 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-31 02:23:24.196934 | orchestrator | Tuesday 31 March 2026 02:22:31 +0000 (0:00:00.797) 0:03:11.430 *********
2026-03-31 02:23:24.196939 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-31 02:23:24.196944 | orchestrator |
2026-03-31 02:23:24.196950 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-31 02:23:24.196955 | orchestrator | Tuesday 31 March 2026 02:22:32 +0000 (0:00:00.920) 0:03:12.350 *********
2026-03-31 02:23:24.196960 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 02:23:24.196965 | orchestrator |
2026-03-31 02:23:24.196971 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-31 02:23:24.196981 | orchestrator | Tuesday 31 March 2026 02:22:34 +0000 (0:00:01.887) 0:03:14.237 *********
2026-03-31 02:23:24.196986 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:24.196991 | orchestrator |
2026-03-31 02:23:24.196996 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-31 02:23:24.197002 | orchestrator | Tuesday 31 March 2026 02:22:34 +0000 (0:00:00.138) 0:03:14.376 *********
2026-03-31 02:23:24.197007 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 02:23:24.197012 | orchestrator |
2026-03-31 02:23:24.197018 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-31 02:23:24.197023 | orchestrator | Tuesday 31 March 2026 02:22:35 +0000 (0:00:01.016) 0:03:15.392 *********
2026-03-31 02:23:24.197028 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:24.197033 | orchestrator |
2026-03-31 02:23:24.197039 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-31 02:23:24.197044 | orchestrator | Tuesday 31 March 2026 02:22:35 +0000 (0:00:00.119) 0:03:15.512 *********
2026-03-31 02:23:24.197049 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:24.197054 | orchestrator |
2026-03-31 02:23:24.197060 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-31 02:23:24.197065 | orchestrator | Tuesday 31 March
2026 02:22:35 +0000 (0:00:00.133) 0:03:15.645 ********* 2026-03-31 02:23:24.197070 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:23:24.197075 | orchestrator | 2026-03-31 02:23:24.197081 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-31 02:23:24.197090 | orchestrator | Tuesday 31 March 2026 02:22:35 +0000 (0:00:00.123) 0:03:15.769 ********* 2026-03-31 02:23:24.197095 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:23:24.197101 | orchestrator | 2026-03-31 02:23:24.197106 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-31 02:23:24.197111 | orchestrator | Tuesday 31 March 2026 02:22:35 +0000 (0:00:00.145) 0:03:15.915 ********* 2026-03-31 02:23:24.197117 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-31 02:23:24.197122 | orchestrator | 2026-03-31 02:23:24.197127 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-31 02:23:24.197133 | orchestrator | Tuesday 31 March 2026 02:22:41 +0000 (0:00:05.513) 0:03:21.428 ********* 2026-03-31 02:23:24.197138 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-31 02:23:24.197143 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
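The "FAILED - RETRYING ... (30 retries left)" entry above is Ansible's retries/delay loop waiting for the Cilium workloads to become ready; the task is not failing, it is polling. A minimal shell sketch of the same pattern (the `retry` helper and its arguments are illustrative, not taken from the role):

```shell
# Hedged sketch of an Ansible-style "retries/delay" loop:
# run a command up to N times, sleeping between attempts,
# and succeed as soon as the command does.
retry() {
    retries=$1
    delay=$2
    shift 2
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$@"; then
            return 0          # readiness check passed
        fi
        i=$((i + 1))
        sleep "$delay"        # back off before the next attempt
    done
    return 1                  # exhausted all retries
}
```

In this spirit, the play's readiness check would be roughly `retry 30 10 kubectl -n kube-system rollout status daemonset/cilium --timeout=30s` (a hypothetical invocation; the role's actual command and namespace are not shown in this log).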
2026-03-31 02:23:24.197153 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-31 02:23:49.233788 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-31 02:23:49.233904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-31 02:23:49.233920 | orchestrator |
2026-03-31 02:23:49.233932 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-31 02:23:49.233943 | orchestrator | Tuesday 31 March 2026 02:23:24 +0000 (0:00:42.808) 0:04:04.236 *********
2026-03-31 02:23:49.233953 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 02:23:49.233963 | orchestrator |
2026-03-31 02:23:49.233973 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-31 02:23:49.233983 | orchestrator | Tuesday 31 March 2026 02:23:25 +0000 (0:00:01.339) 0:04:05.576 *********
2026-03-31 02:23:49.233994 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-31 02:23:49.234003 | orchestrator |
2026-03-31 02:23:49.234013 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-31 02:23:49.234102 | orchestrator | Tuesday 31 March 2026 02:23:27 +0000 (0:00:01.633) 0:04:07.210 *********
2026-03-31 02:23:49.234120 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-31 02:23:49.234138 | orchestrator |
2026-03-31 02:23:49.234156 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-31 02:23:49.234168 | orchestrator | Tuesday 31 March 2026 02:23:28 +0000 (0:00:01.432) 0:04:08.642 *********
2026-03-31 02:23:49.234201 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:49.234211 | orchestrator |
2026-03-31 02:23:49.234221 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-31 02:23:49.234230 | orchestrator | Tuesday 31 March 2026 02:23:28 +0000 (0:00:00.153) 0:04:08.795 *********
2026-03-31 02:23:49.234240 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-31 02:23:49.234251 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-31 02:23:49.234262 | orchestrator |
2026-03-31 02:23:49.234274 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-31 02:23:49.234285 | orchestrator | Tuesday 31 March 2026 02:23:30 +0000 (0:00:01.989) 0:04:10.785 *********
2026-03-31 02:23:49.234296 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:49.234307 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:23:49.234376 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:23:49.234387 | orchestrator |
2026-03-31 02:23:49.234398 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-31 02:23:49.234409 | orchestrator | Tuesday 31 March 2026 02:23:31 +0000 (0:00:00.321) 0:04:11.106 *********
2026-03-31 02:23:49.234420 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:23:49.234431 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:23:49.234441 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:23:49.234452 | orchestrator |
2026-03-31 02:23:49.234463 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-31 02:23:49.234474 | orchestrator |
2026-03-31 02:23:49.234485 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-31 02:23:49.234496 | orchestrator | Tuesday 31 March 2026 02:23:32 +0000 (0:00:00.971) 0:04:12.077 *********
2026-03-31 02:23:49.234507 | orchestrator | ok: [testbed-manager]
2026-03-31 02:23:49.234518 | orchestrator |
2026-03-31 02:23:49.234529 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-31 02:23:49.234540 | orchestrator | Tuesday 31 March 2026 02:23:32 +0000 (0:00:00.373) 0:04:12.451 *********
2026-03-31 02:23:49.234550 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 02:23:49.234561 | orchestrator |
2026-03-31 02:23:49.234573 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-31 02:23:49.234584 | orchestrator | Tuesday 31 March 2026 02:23:32 +0000 (0:00:00.235) 0:04:12.686 *********
2026-03-31 02:23:49.234595 | orchestrator | changed: [testbed-manager]
2026-03-31 02:23:49.234606 | orchestrator |
2026-03-31 02:23:49.234617 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-31 02:23:49.234627 | orchestrator |
2026-03-31 02:23:49.234637 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-31 02:23:49.234646 | orchestrator | Tuesday 31 March 2026 02:23:38 +0000 (0:00:05.909) 0:04:18.596 *********
2026-03-31 02:23:49.234656 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:23:49.234665 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:23:49.234675 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:23:49.234685 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:23:49.234694 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:23:49.234703 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:23:49.234713 | orchestrator |
2026-03-31 02:23:49.234722 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-31 02:23:49.234732 | orchestrator | Tuesday 31 March 2026 02:23:39 +0000 (0:00:00.622) 0:04:19.219 *********
2026-03-31 02:23:49.234745 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 02:23:49.234761 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 02:23:49.234777 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 02:23:49.234792 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 02:23:49.234823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 02:23:49.234839 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 02:23:49.234853 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 02:23:49.234871 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 02:23:49.234887 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 02:23:49.234927 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 02:23:49.234945 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 02:23:49.234963 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 02:23:49.234980 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 02:23:49.234995 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 02:23:49.235009 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 02:23:49.235036 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 02:23:49.235046 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 02:23:49.235055 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 02:23:49.235065 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 02:23:49.235074 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 02:23:49.235084 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 02:23:49.235093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-31 02:23:49.235103 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-31 02:23:49.235112 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-31 02:23:49.235122 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-31 02:23:49.235131 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-31 02:23:49.235141 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-31 02:23:49.235151 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-31 02:23:49.235160 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-31 02:23:49.235170 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-31 02:23:49.235179 | orchestrator |
2026-03-31 02:23:49.235188 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-31 02:23:49.235198 | orchestrator | Tuesday 31 March 2026 02:23:47 +0000 (0:00:08.771) 0:04:27.990 *********
2026-03-31 02:23:49.235207 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:23:49.235217 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:23:49.235226 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:23:49.235236 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:49.235246 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:23:49.235255 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:23:49.235264 | orchestrator |
2026-03-31 02:23:49.235274 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-31 02:23:49.235283 | orchestrator | Tuesday 31 March 2026 02:23:48 +0000 (0:00:00.555) 0:04:28.546 *********
2026-03-31 02:23:49.235293 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:23:49.235341 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:23:49.235352 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:23:49.235362 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:23:49.235371 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:23:49.235380 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:23:49.235390 | orchestrator |
2026-03-31 02:23:49.235400 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:23:49.235409 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:23:49.235421 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-31 02:23:49.235431 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-31 02:23:49.235441 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-31 02:23:49.235452 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-31 02:23:49.235469 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-31 02:23:49.235484 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-31 02:23:49.235501 | orchestrator |
2026-03-31 02:23:49.235517 | orchestrator |
2026-03-31 02:23:49.235534 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:23:49.235550 | orchestrator | Tuesday 31 March 2026 02:23:49 +0000 (0:00:00.703) 0:04:29.250 *********
2026-03-31 02:23:49.235578 | orchestrator | ===============================================================================
2026-03-31 02:23:49.868909 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.97s
2026-03-31 02:23:49.869014 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.81s
2026-03-31 02:23:49.869029 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.09s
2026-03-31 02:23:49.869040 | orchestrator | kubectl : Install required packages ------------------------------------ 13.36s
2026-03-31 02:23:49.869051 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.33s
2026-03-31 02:23:49.869061 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.81s
2026-03-31 02:23:49.869071 | orchestrator | Manage labels ----------------------------------------------------------- 8.77s
2026-03-31 02:23:49.869082 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.91s
2026-03-31 02:23:49.869093 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.51s
2026-03-31 02:23:49.869103 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.44s
2026-03-31 02:23:49.869114 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.19s
2026-03-31 02:23:49.869127 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.14s
2026-03-31 02:23:49.869138 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.02s
2026-03-31 02:23:49.869148 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.00s
2026-03-31 02:23:49.869158 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.99s
2026-03-31 02:23:49.869177 | orchestrator | kubectl : Add repository gpg key ---------------------------------------- 1.92s
2026-03-31 02:23:49.869204 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.91s
2026-03-31 02:23:49.869264 | orchestrator | k3s_server_post : Wait for connectivity to kube VIP --------------------- 1.89s
2026-03-31 02:23:49.869283 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.88s
2026-03-31 02:23:49.869301 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.82s
2026-03-31 02:23:50.274378 | orchestrator | + osism apply copy-kubeconfig
2026-03-31 02:24:02.494341 | orchestrator | 2026-03-31 02:24:02 | INFO  | Task ee0db7f3-4934-4145-89a7-f010320935b3 (copy-kubeconfig) was prepared for execution.
2026-03-31 02:24:02.494566 | orchestrator | 2026-03-31 02:24:02 | INFO  | It takes a moment until task ee0db7f3-4934-4145-89a7-f010320935b3 (copy-kubeconfig) has been started and output is visible here.
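The `osism apply copy-kubeconfig` step prepared above amounts to fetching the k3s kubeconfig from the first control-plane node and rewriting its server entry so it no longer points at the node-local loopback address. A hedged sketch of the rewrite (the VIP `192.168.16.9` is an illustrative assumption, not a value from this log; k3s itself writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml` with `server: https://127.0.0.1:6443`):

```shell
# Hedged sketch: point a fetched k3s kubeconfig at the cluster VIP
# instead of the loopback address it ships with.
rewrite_kubeconfig() {
    src=$1   # kubeconfig fetched from the first master
    dst=$2   # destination in the configuration repository
    vip=$3   # cluster VIP (illustrative)
    sed "s|https://127\.0\.0\.1:6443|https://${vip}:6443|" "$src" > "$dst"
}
```

On the manager this would be combined with a fetch step, e.g. copying `/etc/rancher/k3s/k3s.yaml` from testbed-node-0 first (the actual paths used by the play are not shown in this log).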
2026-03-31 02:24:10.016433 | orchestrator |
2026-03-31 02:24:10.016560 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-31 02:24:10.016580 | orchestrator |
2026-03-31 02:24:10.016592 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-31 02:24:10.016604 | orchestrator | Tuesday 31 March 2026 02:24:07 +0000 (0:00:00.162) 0:00:00.162 *********
2026-03-31 02:24:10.016616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-31 02:24:10.016629 | orchestrator |
2026-03-31 02:24:10.016640 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-31 02:24:10.016651 | orchestrator | Tuesday 31 March 2026 02:24:07 +0000 (0:00:00.763) 0:00:00.925 *********
2026-03-31 02:24:10.016687 | orchestrator | changed: [testbed-manager]
2026-03-31 02:24:10.016700 | orchestrator |
2026-03-31 02:24:10.016711 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-31 02:24:10.016722 | orchestrator | Tuesday 31 March 2026 02:24:09 +0000 (0:00:01.300) 0:00:02.226 *********
2026-03-31 02:24:10.016740 | orchestrator | changed: [testbed-manager]
2026-03-31 02:24:10.016751 | orchestrator |
2026-03-31 02:24:10.016768 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:24:10.016778 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:24:10.016790 | orchestrator |
2026-03-31 02:24:10.016800 | orchestrator |
2026-03-31 02:24:10.016810 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:24:10.016819 | orchestrator | Tuesday 31 March 2026 02:24:09 +0000 (0:00:00.502) 0:00:02.729 *********
2026-03-31 02:24:10.016829 | orchestrator | ===============================================================================
2026-03-31 02:24:10.016839 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.30s
2026-03-31 02:24:10.016850 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s
2026-03-31 02:24:10.016861 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s
2026-03-31 02:24:10.398417 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-03-31 02:24:22.714114 | orchestrator | 2026-03-31 02:24:22 | INFO  | Task e337efa7-b4bc-4559-a12b-7d456e45a755 (openstackclient) was prepared for execution.
2026-03-31 02:24:22.714226 | orchestrator | 2026-03-31 02:24:22 | INFO  | It takes a moment until task e337efa7-b4bc-4559-a12b-7d456e45a755 (openstackclient) has been started and output is visible here.
2026-03-31 02:25:12.226761 | orchestrator |
2026-03-31 02:25:12.226909 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-31 02:25:12.226935 | orchestrator |
2026-03-31 02:25:12.226955 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-31 02:25:12.226974 | orchestrator | Tuesday 31 March 2026 02:24:27 +0000 (0:00:00.242) 0:00:00.242 *********
2026-03-31 02:25:12.226993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-31 02:25:12.227086 | orchestrator |
2026-03-31 02:25:12.227147 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-31 02:25:12.227167 | orchestrator | Tuesday 31 March 2026 02:24:27 +0000 (0:00:00.246) 0:00:00.488 *********
2026-03-31 02:25:12.227185 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-31 02:25:12.227229 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-31 02:25:12.227248 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-31 02:25:12.227266 | orchestrator |
2026-03-31 02:25:12.227284 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-31 02:25:12.227350 | orchestrator | Tuesday 31 March 2026 02:24:28 +0000 (0:00:01.334) 0:00:01.823 *********
2026-03-31 02:25:12.227369 | orchestrator | changed: [testbed-manager]
2026-03-31 02:25:12.227387 | orchestrator |
2026-03-31 02:25:12.227404 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-31 02:25:12.227420 | orchestrator | Tuesday 31 March 2026 02:24:30 +0000 (0:00:01.583) 0:00:03.406 *********
2026-03-31 02:25:12.227438 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-31 02:25:12.227456 | orchestrator | ok: [testbed-manager]
2026-03-31 02:25:12.227475 | orchestrator |
2026-03-31 02:25:12.227493 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-31 02:25:12.227510 | orchestrator | Tuesday 31 March 2026 02:25:06 +0000 (0:00:36.667) 0:00:40.074 *********
2026-03-31 02:25:12.227528 | orchestrator | changed: [testbed-manager]
2026-03-31 02:25:12.227545 | orchestrator |
2026-03-31 02:25:12.227562 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-31 02:25:12.227580 | orchestrator | Tuesday 31 March 2026 02:25:07 +0000 (0:00:00.940) 0:00:41.015 *********
2026-03-31 02:25:12.227598 | orchestrator | ok: [testbed-manager]
2026-03-31 02:25:12.227616 | orchestrator |
2026-03-31 02:25:12.227633 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-31 02:25:12.227650 | orchestrator | Tuesday 31 March 2026 02:25:08 +0000 (0:00:00.637) 0:00:41.652 *********
2026-03-31 02:25:12.227667 | orchestrator | changed: [testbed-manager]
2026-03-31 02:25:12.227684 | orchestrator |
2026-03-31 02:25:12.227702 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-31 02:25:12.227721 | orchestrator | Tuesday 31 March 2026 02:25:09 +0000 (0:00:01.438) 0:00:43.091 *********
2026-03-31 02:25:12.227740 | orchestrator | changed: [testbed-manager]
2026-03-31 02:25:12.227758 | orchestrator |
2026-03-31 02:25:12.227774 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-31 02:25:12.227790 | orchestrator | Tuesday 31 March 2026 02:25:10 +0000 (0:00:00.730) 0:00:43.821 *********
2026-03-31 02:25:12.227808 | orchestrator | changed: [testbed-manager]
2026-03-31 02:25:12.227827 | orchestrator |
2026-03-31 02:25:12.227845 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-31 02:25:12.227863 | orchestrator | Tuesday 31 March 2026 02:25:11 +0000 (0:00:00.604) 0:00:44.426 *********
2026-03-31 02:25:12.227882 | orchestrator | ok: [testbed-manager]
2026-03-31 02:25:12.227900 | orchestrator |
2026-03-31 02:25:12.227918 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:25:12.227938 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:25:12.227959 | orchestrator |
2026-03-31 02:25:12.227976 | orchestrator |
2026-03-31 02:25:12.228010 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:25:12.228027 | orchestrator | Tuesday 31 March 2026 02:25:11 +0000 (0:00:00.434) 0:00:44.860 *********
2026-03-31 02:25:12.228048 | orchestrator | ===============================================================================
2026-03-31 02:25:12.228066 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.67s
2026-03-31 02:25:12.228084 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.58s
2026-03-31 02:25:12.228124 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.44s
2026-03-31 02:25:12.228142 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.33s
2026-03-31 02:25:12.228160 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.94s
2026-03-31 02:25:12.228179 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.73s
2026-03-31 02:25:12.228199 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s
2026-03-31 02:25:12.228218 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-03-31 02:25:12.228238 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2026-03-31 02:25:12.228258 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.25s
2026-03-31 02:25:14.649441 | orchestrator | 2026-03-31 02:25:14 | INFO  | Task f2f170a1-6928-4eb7-b6d7-5fbacb0c0cb8 (common) was prepared for execution.
2026-03-31 02:25:14.649515 | orchestrator | 2026-03-31 02:25:14 | INFO  | It takes a moment until task f2f170a1-6928-4eb7-b6d7-5fbacb0c0cb8 (common) has been started and output is visible here.
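The "Copy openstack wrapper script" task above installs a shim so that `openstack` on the manager runs inside the containerized openstackclient service. A hedged sketch of such a wrapper (the compose project path `/opt/openstackclient` matches the directories created above, but the service name and the actual script shipped by osism.services.openstackclient are illustrative assumptions):

```shell
# Hedged sketch: generate a wrapper that forwards the openstack CLI
# into the openstackclient container managed by docker compose.
write_wrapper() {
    target=$1
    cat > "$target" <<'EOF'
#!/usr/bin/env bash
# Run the openstack CLI inside the openstackclient container.
# Project directory and service name are illustrative assumptions.
exec docker compose --project-directory /opt/openstackclient \
    exec openstackclient openstack "$@"
EOF
    chmod +x "$target"
}
```

With such a shim on the PATH, `openstack server list` on the manager transparently executes inside the container, which is why the role can also ship a bash completion script for it.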
2026-03-31 02:25:27.346000 | orchestrator |
2026-03-31 02:25:27.346234 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-31 02:25:27.346266 | orchestrator |
2026-03-31 02:25:27.346289 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-31 02:25:27.346340 | orchestrator | Tuesday 31 March 2026 02:25:19 +0000 (0:00:00.315) 0:00:00.315 *********
2026-03-31 02:25:27.346362 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:25:27.346383 | orchestrator |
2026-03-31 02:25:27.346402 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-31 02:25:27.346420 | orchestrator | Tuesday 31 March 2026 02:25:20 +0000 (0:00:01.362) 0:00:01.677 *********
2026-03-31 02:25:27.346439 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346453 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346465 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346476 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346487 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346497 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346508 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346520 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-31 02:25:27.346533 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346565 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346578 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346592 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346604 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346615 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346625 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-31 02:25:27.346636 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346647 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346679 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346690 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346701 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346712 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-31 02:25:27.346722 | orchestrator |
2026-03-31 02:25:27.346733 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-31 02:25:27.346744 | orchestrator | Tuesday 31 March 2026 02:25:22 +0000 (0:00:02.578) 0:00:04.256 *********
2026-03-31 02:25:27.346755 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:25:27.346767 | orchestrator |
2026-03-31 02:25:27.346778 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-31 02:25:27.346793 | orchestrator | Tuesday 31 March 2026 02:25:24 +0000 (0:00:01.417) 0:00:05.674 *********
2026-03-31 02:25:27.346808 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-31 02:25:27.346822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-31 02:25:27.346865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:27.346878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:27.346890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:27.346901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:27.346920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:27.346932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:27.346943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:27.346971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329033 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329134 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 
02:25:28.329289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329397 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:28.329465 | orchestrator | 2026-03-31 02:25:28.329478 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-31 02:25:28.329492 | orchestrator | Tuesday 31 March 2026 02:25:27 +0000 (0:00:03.574) 0:00:09.248 ********* 2026-03-31 02:25:28.329508 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:28.329522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:28.329535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:28.329548 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:25:28.329562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:28.329590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006737 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:25:29.006828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:29.006851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006880 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:25:29.006891 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:29.006912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.006933 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:25:29.006968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:29.006999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.007016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.007035 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:25:29.007052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:29.007070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.007086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:29.007097 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:25:29.007109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:29.007130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076666 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:25:30.076682 | orchestrator | 2026-03-31 02:25:30.076693 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-31 02:25:30.076704 | orchestrator | Tuesday 31 March 2026 02:25:28 +0000 (0:00:01.027) 0:00:10.276 ********* 2026-03-31 02:25:30.076754 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:30.076768 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:30.076818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076857 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:25:30.076872 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:25:30.076923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:30.076943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.076970 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:25:30.076985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:30.076999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.077021 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:30.077048 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:25:30.077065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:30.077157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577379 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:25:35.577412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:35.577435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577471 | 
orchestrator | skipping: [testbed-node-4] 2026-03-31 02:25:35.577487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 02:25:35.577606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:35.577647 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:25:35.577665 | orchestrator | 2026-03-31 02:25:35.577683 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-31 
02:25:35.577701 | orchestrator | Tuesday 31 March 2026 02:25:31 +0000 (0:00:02.106) 0:00:12.382 ********* 2026-03-31 02:25:35.577718 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:25:35.577736 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:25:35.577753 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:25:35.577770 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:25:35.577810 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:25:35.577828 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:25:35.577845 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:25:35.577862 | orchestrator | 2026-03-31 02:25:35.577879 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-31 02:25:35.577895 | orchestrator | Tuesday 31 March 2026 02:25:31 +0000 (0:00:00.715) 0:00:13.098 ********* 2026-03-31 02:25:35.577912 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:25:35.577928 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:25:35.577943 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:25:35.577959 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:25:35.577976 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:25:35.577994 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:25:35.578010 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:25:35.578111 | orchestrator | 2026-03-31 02:25:35.578129 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-31 02:25:35.578147 | orchestrator | Tuesday 31 March 2026 02:25:32 +0000 (0:00:01.059) 0:00:14.157 ********* 2026-03-31 02:25:35.578166 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578286 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:35.578368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:38.415010 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415246 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:38.415351 | orchestrator | 2026-03-31 02:25:38.415362 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-31 02:25:38.415373 | orchestrator | Tuesday 31 March 2026 02:25:36 +0000 (0:00:03.499) 0:00:17.656 ********* 2026-03-31 02:25:38.415384 | orchestrator | [WARNING]: Skipped 2026-03-31 
02:25:38.415394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-31 02:25:38.415405 | orchestrator | to this access issue: 2026-03-31 02:25:38.415415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-31 02:25:38.415425 | orchestrator | directory 2026-03-31 02:25:38.415435 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 02:25:38.415445 | orchestrator | 2026-03-31 02:25:38.415455 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-31 02:25:38.415464 | orchestrator | Tuesday 31 March 2026 02:25:37 +0000 (0:00:01.010) 0:00:18.667 ********* 2026-03-31 02:25:38.415474 | orchestrator | [WARNING]: Skipped 2026-03-31 02:25:38.415489 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-31 02:25:48.484534 | orchestrator | to this access issue: 2026-03-31 02:25:48.484667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-31 02:25:48.484693 | orchestrator | directory 2026-03-31 02:25:48.484711 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 02:25:48.484730 | orchestrator | 2026-03-31 02:25:48.484749 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-31 02:25:48.484767 | orchestrator | Tuesday 31 March 2026 02:25:38 +0000 (0:00:01.314) 0:00:19.982 ********* 2026-03-31 02:25:48.484813 | orchestrator | [WARNING]: Skipped 2026-03-31 02:25:48.484830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-31 02:25:48.484847 | orchestrator | to this access issue: 2026-03-31 02:25:48.484864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-31 02:25:48.484881 | orchestrator | directory 2026-03-31 02:25:48.484897 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-03-31 02:25:48.484914 | orchestrator | 2026-03-31 02:25:48.484932 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-31 02:25:48.484949 | orchestrator | Tuesday 31 March 2026 02:25:39 +0000 (0:00:00.896) 0:00:20.878 ********* 2026-03-31 02:25:48.484966 | orchestrator | [WARNING]: Skipped 2026-03-31 02:25:48.484984 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-31 02:25:48.485001 | orchestrator | to this access issue: 2026-03-31 02:25:48.485019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-31 02:25:48.485035 | orchestrator | directory 2026-03-31 02:25:48.485053 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 02:25:48.485070 | orchestrator | 2026-03-31 02:25:48.485088 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-31 02:25:48.485106 | orchestrator | Tuesday 31 March 2026 02:25:40 +0000 (0:00:00.887) 0:00:21.766 ********* 2026-03-31 02:25:48.485124 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:25:48.485142 | orchestrator | changed: [testbed-manager] 2026-03-31 02:25:48.485160 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:25:48.485179 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:25:48.485196 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:25:48.485213 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:25:48.485251 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:25:48.485270 | orchestrator | 2026-03-31 02:25:48.485288 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-31 02:25:48.485304 | orchestrator | Tuesday 31 March 2026 02:25:43 +0000 (0:00:02.543) 0:00:24.310 ********* 2026-03-31 02:25:48.485320 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485401 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485419 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485437 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485464 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 02:25:48.485482 | orchestrator | 2026-03-31 02:25:48.485500 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-31 02:25:48.485519 | orchestrator | Tuesday 31 March 2026 02:25:45 +0000 (0:00:02.112) 0:00:26.422 ********* 2026-03-31 02:25:48.485537 | orchestrator | changed: [testbed-manager] 2026-03-31 02:25:48.485555 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:25:48.485573 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:25:48.485591 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:25:48.485609 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:25:48.485627 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:25:48.485645 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:25:48.485663 | orchestrator | 2026-03-31 02:25:48.485681 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-31 02:25:48.485713 | orchestrator | Tuesday 31 March 2026 02:25:47 +0000 (0:00:02.003) 0:00:28.426 ********* 2026-03-31 
02:25:48.485735 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:48.485782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:48.485800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:48.485818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:48.485837 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:48.485862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:48.485880 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:48.485908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:48.486007 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:48.486128 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:54.487267 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:54.487441 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487467 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:54.487502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:25:54.487544 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487559 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:54.487573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-03-31 02:25:54.487609 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487625 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487639 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487653 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:54.487667 | orchestrator | 2026-03-31 02:25:54.487682 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************ 2026-03-31 02:25:54.487697 | orchestrator | Tuesday 31 March 2026 02:25:48 +0000 (0:00:01.577) 0:00:30.003 ********* 2026-03-31 02:25:54.487708 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487737 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487746 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487754 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487762 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487770 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 02:25:54.487779 | orchestrator | 2026-03-31 02:25:54.487788 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-31 02:25:54.487797 | orchestrator | Tuesday 31 March 2026 02:25:50 +0000 (0:00:02.004) 0:00:32.007 ********* 2026-03-31 02:25:54.487858 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487906 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2026-03-31 02:25:54.487914 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487923 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 02:25:54.487932 | orchestrator | 2026-03-31 02:25:54.487940 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-31 02:25:54.487948 | orchestrator | Tuesday 31 March 2026 02:25:52 +0000 (0:00:01.738) 0:00:33.746 ********* 2026-03-31 02:25:54.487956 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:54.488018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072887 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 02:25:55.072909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.072931 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.072984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:25:55.073171 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:27:19.827255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:27:19.827502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:27:19.827524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-31 02:27:19.827550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:27:19.827562 | orchestrator | 2026-03-31 02:27:19.827576 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-31 02:27:19.827588 | orchestrator | Tuesday 31 March 2026 02:25:55 +0000 (0:00:02.593) 0:00:36.339 ********* 2026-03-31 02:27:19.827600 | orchestrator | changed: [testbed-manager] 2026-03-31 02:27:19.827612 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:27:19.827623 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:27:19.827633 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:27:19.827644 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:27:19.827655 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:27:19.827666 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:27:19.827677 | orchestrator | 2026-03-31 02:27:19.827703 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-31 02:27:19.827725 | orchestrator | Tuesday 31 March 2026 02:25:56 +0000 (0:00:01.433) 0:00:37.773 ********* 2026-03-31 02:27:19.827737 | orchestrator | changed: [testbed-manager] 2026-03-31 02:27:19.827747 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:27:19.827758 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:27:19.827771 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:27:19.827783 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:27:19.827794 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:27:19.827807 | orchestrator | 
changed: [testbed-node-5] 2026-03-31 02:27:19.827819 | orchestrator | 2026-03-31 02:27:19.827832 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.827845 | orchestrator | Tuesday 31 March 2026 02:25:57 +0000 (0:00:01.149) 0:00:38.923 ********* 2026-03-31 02:27:19.827857 | orchestrator | 2026-03-31 02:27:19.827869 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.827882 | orchestrator | Tuesday 31 March 2026 02:25:57 +0000 (0:00:00.074) 0:00:38.997 ********* 2026-03-31 02:27:19.827894 | orchestrator | 2026-03-31 02:27:19.827906 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.827918 | orchestrator | Tuesday 31 March 2026 02:25:57 +0000 (0:00:00.066) 0:00:39.063 ********* 2026-03-31 02:27:19.827930 | orchestrator | 2026-03-31 02:27:19.827943 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.827955 | orchestrator | Tuesday 31 March 2026 02:25:57 +0000 (0:00:00.064) 0:00:39.127 ********* 2026-03-31 02:27:19.827968 | orchestrator | 2026-03-31 02:27:19.827980 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.828005 | orchestrator | Tuesday 31 March 2026 02:25:58 +0000 (0:00:00.237) 0:00:39.365 ********* 2026-03-31 02:27:19.828017 | orchestrator | 2026-03-31 02:27:19.828030 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.828042 | orchestrator | Tuesday 31 March 2026 02:25:58 +0000 (0:00:00.085) 0:00:39.450 ********* 2026-03-31 02:27:19.828054 | orchestrator | 2026-03-31 02:27:19.828066 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 02:27:19.828079 | orchestrator | Tuesday 31 March 2026 02:25:58 +0000 
(0:00:00.066) 0:00:39.517 ********* 2026-03-31 02:27:19.828091 | orchestrator | 2026-03-31 02:27:19.828104 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-31 02:27:19.828116 | orchestrator | Tuesday 31 March 2026 02:25:58 +0000 (0:00:00.135) 0:00:39.652 ********* 2026-03-31 02:27:19.828129 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:27:19.828142 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:27:19.828155 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:27:19.828167 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:27:19.828177 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:27:19.828206 | orchestrator | changed: [testbed-manager] 2026-03-31 02:27:19.828218 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:27:19.828229 | orchestrator | 2026-03-31 02:27:19.828240 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-31 02:27:19.828251 | orchestrator | Tuesday 31 March 2026 02:26:35 +0000 (0:00:37.133) 0:01:16.786 ********* 2026-03-31 02:27:19.828262 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:27:19.828272 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:27:19.828283 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:27:19.828293 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:27:19.828304 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:27:19.828315 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:27:19.828325 | orchestrator | changed: [testbed-manager] 2026-03-31 02:27:19.828336 | orchestrator | 2026-03-31 02:27:19.828347 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-31 02:27:19.828357 | orchestrator | Tuesday 31 March 2026 02:27:09 +0000 (0:00:34.351) 0:01:51.138 ********* 2026-03-31 02:27:19.828368 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:27:19.828380 | orchestrator | ok: 
[testbed-node-0] 2026-03-31 02:27:19.828391 | orchestrator | ok: [testbed-manager] 2026-03-31 02:27:19.828401 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:27:19.828430 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:27:19.828442 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:27:19.828452 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:27:19.828463 | orchestrator | 2026-03-31 02:27:19.828473 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-31 02:27:19.828484 | orchestrator | Tuesday 31 March 2026 02:27:11 +0000 (0:00:01.911) 0:01:53.049 ********* 2026-03-31 02:27:19.828495 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:27:19.828506 | orchestrator | changed: [testbed-manager] 2026-03-31 02:27:19.828516 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:27:19.828527 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:27:19.828538 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:27:19.828548 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:27:19.828559 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:27:19.828570 | orchestrator | 2026-03-31 02:27:19.828580 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:27:19.828592 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828604 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828623 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828641 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828653 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828664 | 
orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828674 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 02:27:19.828685 | orchestrator | 2026-03-31 02:27:19.828696 | orchestrator | 2026-03-31 02:27:19.828707 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:27:19.828718 | orchestrator | Tuesday 31 March 2026 02:27:19 +0000 (0:00:08.025) 0:02:01.074 ********* 2026-03-31 02:27:19.828729 | orchestrator | =============================================================================== 2026-03-31 02:27:19.828740 | orchestrator | common : Restart fluentd container ------------------------------------- 37.13s 2026-03-31 02:27:19.828751 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.35s 2026-03-31 02:27:19.828762 | orchestrator | common : Restart cron container ----------------------------------------- 8.03s 2026-03-31 02:27:19.828772 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.57s 2026-03-31 02:27:19.828783 | orchestrator | common : Copying over config.json files for services -------------------- 3.50s 2026-03-31 02:27:19.828794 | orchestrator | common : Check common containers ---------------------------------------- 2.59s 2026-03-31 02:27:19.828804 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.58s 2026-03-31 02:27:19.828815 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.54s 2026-03-31 02:27:19.828826 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.11s 2026-03-31 02:27:19.828836 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.11s 2026-03-31 02:27:19.828847 | orchestrator | common : Copy 
rabbitmq-env.conf to kolla toolbox ------------------------ 2.00s 2026-03-31 02:27:19.828858 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.00s 2026-03-31 02:27:19.828868 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.91s 2026-03-31 02:27:19.828879 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s 2026-03-31 02:27:19.828890 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.58s 2026-03-31 02:27:19.828901 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s 2026-03-31 02:27:19.828919 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s 2026-03-31 02:27:20.305553 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2026-03-31 02:27:20.305643 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.31s 2026-03-31 02:27:20.305658 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.15s 2026-03-31 02:27:22.832915 | orchestrator | 2026-03-31 02:27:22 | INFO  | Task 7dc7e635-6585-4fa0-b7b7-0c26799949dc (loadbalancer) was prepared for execution. 2026-03-31 02:27:22.833164 | orchestrator | 2026-03-31 02:27:22 | INFO  | It takes a moment until task 7dc7e635-6585-4fa0-b7b7-0c26799949dc (loadbalancer) has been started and output is visible here. 
2026-03-31 02:27:37.085356 | orchestrator |
2026-03-31 02:27:37.085529 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 02:27:37.085548 | orchestrator |
2026-03-31 02:27:37.085556 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 02:27:37.085564 | orchestrator | Tuesday 31 March 2026 02:27:27 +0000 (0:00:00.287) 0:00:00.287 *********
2026-03-31 02:27:37.085588 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:27:37.085597 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:27:37.085604 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:27:37.085611 | orchestrator |
2026-03-31 02:27:37.085617 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 02:27:37.085624 | orchestrator | Tuesday 31 March 2026 02:27:27 +0000 (0:00:00.319) 0:00:00.606 *********
2026-03-31 02:27:37.085632 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-31 02:27:37.085639 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-31 02:27:37.085646 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-31 02:27:37.085652 | orchestrator |
2026-03-31 02:27:37.085659 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-31 02:27:37.085666 | orchestrator |
2026-03-31 02:27:37.085672 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-31 02:27:37.085686 | orchestrator | Tuesday 31 March 2026 02:27:28 +0000 (0:00:00.451) 0:00:01.058 *********
2026-03-31 02:27:37.085693 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:27:37.085700 | orchestrator |
2026-03-31 02:27:37.085707 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-31 02:27:37.085714 | orchestrator | Tuesday 31 March 2026 02:27:28 +0000 (0:00:00.567) 0:00:01.626 *********
2026-03-31 02:27:37.085720 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:27:37.085727 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:27:37.085733 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:27:37.085740 | orchestrator |
2026-03-31 02:27:37.085747 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-31 02:27:37.085753 | orchestrator | Tuesday 31 March 2026 02:27:29 +0000 (0:00:00.635) 0:00:02.261 *********
2026-03-31 02:27:37.085760 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:27:37.085766 | orchestrator |
2026-03-31 02:27:37.085773 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-31 02:27:37.085779 | orchestrator | Tuesday 31 March 2026 02:27:29 +0000 (0:00:00.684) 0:00:02.945 *********
2026-03-31 02:27:37.085786 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:27:37.085792 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:27:37.085799 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:27:37.085806 | orchestrator |
2026-03-31 02:27:37.085812 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-31 02:27:37.085819 | orchestrator | Tuesday 31 March 2026 02:27:30 +0000 (0:00:00.614) 0:00:03.560 *********
2026-03-31 02:27:37.085826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085839 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-31 02:27:37.085865 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-31 02:27:37.085873 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-31 02:27:37.085879 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-31 02:27:37.085886 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-31 02:27:37.085898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-31 02:27:37.085904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-31 02:27:37.085911 | orchestrator |
2026-03-31 02:27:37.085917 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-31 02:27:37.085924 | orchestrator | Tuesday 31 March 2026 02:27:32 +0000 (0:00:02.157) 0:00:05.718 *********
2026-03-31 02:27:37.085931 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-31 02:27:37.085938 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-31 02:27:37.085944 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-31 02:27:37.085951 | orchestrator |
2026-03-31 02:27:37.085958 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-31 02:27:37.085965 | orchestrator | Tuesday 31 March 2026 02:27:33 +0000 (0:00:00.706) 0:00:06.425 *********
2026-03-31 02:27:37.085971 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-31 02:27:37.085978 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-31 02:27:37.085985 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-31 02:27:37.085991 | orchestrator |
2026-03-31 02:27:37.085998 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-31 02:27:37.086004 | orchestrator | Tuesday 31 March 2026 02:27:34 +0000 (0:00:01.297) 0:00:07.722 *********
2026-03-31 02:27:37.086011 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-31 02:27:37.086082 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:27:37.086104 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-31 02:27:37.086111 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:27:37.086118 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-31 02:27:37.086125 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:27:37.086131 | orchestrator |
2026-03-31 02:27:37.086138 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-31 02:27:37.086145 | orchestrator | Tuesday 31 March 2026 02:27:35 +0000 (0:00:00.532) 0:00:08.255 *********
2026-03-31 02:27:37.086158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 02:27:37.086172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 02:27:37.086179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 02:27:37.086192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:37.086200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:37.086246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:42.442982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:42.443125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:42.443152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:42.443171 | orchestrator |
2026-03-31 02:27:42.443192 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-31 02:27:42.443210 | orchestrator | Tuesday 31 March 2026 02:27:37 +0000 (0:00:01.784) 0:00:10.040 *********
2026-03-31 02:27:42.443228 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:27:42.443276 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:27:42.443294 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:27:42.443308 | orchestrator |
2026-03-31 02:27:42.443322 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-31 02:27:42.443338 | orchestrator | Tuesday 31 March 2026 02:27:37 +0000 (0:00:00.899) 0:00:10.939 *********
2026-03-31 02:27:42.443354 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-31 02:27:42.443370 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-31 02:27:42.443386 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-31 02:27:42.443401 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-31 02:27:42.443418 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-31 02:27:42.443464 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-31 02:27:42.443484 | orchestrator |
2026-03-31 02:27:42.443501 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-31 02:27:42.443518 | orchestrator | Tuesday 31 March 2026 02:27:39 +0000 (0:00:01.515) 0:00:12.454 *********
2026-03-31 02:27:42.443534 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:27:42.443551 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:27:42.443567 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:27:42.443583 | orchestrator |
2026-03-31 02:27:42.443600 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-31 02:27:42.443616 | orchestrator | Tuesday 31 March 2026 02:27:40 +0000 (0:00:00.896) 0:00:13.351 *********
2026-03-31 02:27:42.443634 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:27:42.443651 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:27:42.443669 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:27:42.443687 | orchestrator |
2026-03-31 02:27:42.443702 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-31 02:27:42.443719 | orchestrator | Tuesday 31 March 2026 02:27:41 +0000 (0:00:01.422) 0:00:14.773 *********
2026-03-31 02:27:42.443738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 02:27:42.443785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:42.443806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:42.443827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:42.443862 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:27:42.443880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 02:27:42.443946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:42.443967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:42.443984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:42.444000 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:27:42.444029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 02:27:45.360352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:45.360633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:45.360671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:45.360691 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:27:45.360713 | orchestrator |
2026-03-31 02:27:45.360726 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-31 02:27:45.360738 | orchestrator | Tuesday 31 March 2026 02:27:42 +0000 (0:00:00.628) 0:00:15.401 *********
2026-03-31 02:27:45.360750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 02:27:45.360763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 02:27:45.360774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 02:27:45.360835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:45.360852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:45.360865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:45.360878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:45.360892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:45.360904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:45.360950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:53.870537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:53.870650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce', '__omit_place_holder__d0ed0f2177c61ef5845723177b110301197e06ce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 02:27:53.870668 | orchestrator |
2026-03-31 02:27:53.870682 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-31 02:27:53.870695 | orchestrator | Tuesday 31 March 2026 02:27:45 +0000 (0:00:02.917) 0:00:18.318 *********
2026-03-31 02:27:53.870707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 02:27:53.870720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 02:27:53.870731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 02:27:53.870769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:53.870817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:53.870831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 02:27:53.870843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:53.870854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:53.870865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 02:27:53.870881 | orchestrator |
2026-03-31 02:27:53.870900 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-31 02:27:53.870919 | orchestrator | Tuesday 31 March 2026 02:27:48 +0000 (0:00:03.160) 0:00:21.479 *********
2026-03-31 02:27:53.870950 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 02:27:53.870971 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 02:27:53.870992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 02:27:53.871011 | orchestrator |
2026-03-31 02:27:53.871032 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-31 02:27:53.871046 | orchestrator | Tuesday 31 March 2026 02:27:50 +0000 (0:00:01.875) 0:00:23.355 *********
2026-03-31 02:27:53.871058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 02:27:53.871070 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 02:27:53.871082 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 02:27:53.871095 | orchestrator |
2026-03-31 02:27:53.871107 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-31 02:27:53.871119 | orchestrator | Tuesday 31 March 2026 02:27:53 +0000 (0:00:02.929) 0:00:26.284 *********
2026-03-31 02:27:53.871131 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:27:53.871146 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:27:53.871158 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:27:53.871171 | orchestrator |
2026-03-31 02:27:53.871194 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-31 02:28:05.657378 | orchestrator | Tuesday 31 March 2026 02:27:53 +0000 (0:00:00.551) 0:00:26.835 *********
2026-03-31 02:28:05.657585 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 02:28:05.657621 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 02:28:05.657633 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 02:28:05.657645 | orchestrator |
2026-03-31 02:28:05.657657 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-31 02:28:05.657669 | orchestrator | Tuesday 31 March 2026 02:27:55 +0000 (0:00:02.044) 0:00:28.880 *********
2026-03-31 02:28:05.657681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 02:28:05.657692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 02:28:05.657703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 02:28:05.657714 | orchestrator |
2026-03-31 02:28:05.657725 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-31 02:28:05.657736 | orchestrator | Tuesday 31 March 2026
02:27:57 +0000 (0:00:02.023) 0:00:30.903 ********* 2026-03-31 02:28:05.657833 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-31 02:28:05.657850 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-31 02:28:05.657861 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-31 02:28:05.657872 | orchestrator | 2026-03-31 02:28:05.657898 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-31 02:28:05.657912 | orchestrator | Tuesday 31 March 2026 02:27:59 +0000 (0:00:01.448) 0:00:32.351 ********* 2026-03-31 02:28:05.657926 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-31 02:28:05.657938 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-31 02:28:05.657951 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-31 02:28:05.657963 | orchestrator | 2026-03-31 02:28:05.658003 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-31 02:28:05.658081 | orchestrator | Tuesday 31 March 2026 02:28:00 +0000 (0:00:01.505) 0:00:33.856 ********* 2026-03-31 02:28:05.658095 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:28:05.658106 | orchestrator | 2026-03-31 02:28:05.658117 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-31 02:28:05.658128 | orchestrator | Tuesday 31 March 2026 02:28:01 +0000 (0:00:00.597) 0:00:34.454 ********* 2026-03-31 02:28:05.658141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:05.658302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:05.658322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:05.658342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:05.658358 | orchestrator | 2026-03-31 02:28:05.658375 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-31 02:28:05.658393 | orchestrator | Tuesday 31 March 2026 02:28:04 +0000 (0:00:03.494) 0:00:37.949 ********* 2026-03-31 02:28:05.658435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:06.448003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:06.448126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:06.448163 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:06.448176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:06.448186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:06.448195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:06.448204 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:06.448213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:06.448259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:06.448272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:06.448299 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:06.448317 | orchestrator | 2026-03-31 02:28:06.448327 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-31 
02:28:06.448338 | orchestrator | Tuesday 31 March 2026 02:28:05 +0000 (0:00:00.669) 0:00:38.618 ********* 2026-03-31 02:28:06.448348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:06.448357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:06.448367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:06.448376 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:06.448385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:06.448404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:07.283679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:07.283783 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:07.283794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:07.283803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:07.283810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:07.283816 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:07.283822 | orchestrator | 2026-03-31 02:28:07.283830 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-31 02:28:07.283837 | orchestrator | Tuesday 31 March 2026 02:28:06 +0000 (0:00:00.791) 0:00:39.409 ********* 2026-03-31 02:28:07.283844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:07.283851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:07.283870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:07.283882 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:07.283889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:07.283895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:07.283902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:07.283908 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:07.283915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:07.283957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:07.283968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:07.283985 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:08.759347 | orchestrator | 2026-03-31 02:28:08.759442 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-31 02:28:08.759455 | orchestrator | Tuesday 31 March 2026 02:28:07 +0000 (0:00:00.830) 0:00:40.240 ********* 2026-03-31 02:28:08.759504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:08.759519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:08.759529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:08.759538 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:08.759548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:08.759557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:08.759582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:08.759610 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:08.759636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:08.759646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:08.759654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:08.759667 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:08.759680 | orchestrator | 2026-03-31 02:28:08.759694 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-31 02:28:08.759707 | orchestrator | Tuesday 31 March 2026 02:28:07 +0000 (0:00:00.611) 0:00:40.851 ********* 2026-03-31 02:28:08.759720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:08.759733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:08.759766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:08.759780 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:08.759804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:09.982570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:09.982679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:09.982696 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:09.982710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:09.982723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:09.982734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:09.982770 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:09.982782 | orchestrator | 2026-03-31 02:28:09.982794 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-31 02:28:09.982807 | orchestrator | Tuesday 31 March 2026 02:28:08 +0000 (0:00:00.869) 0:00:41.721 ********* 2026-03-31 02:28:09.982833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-31 02:28:09.982870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:09.982890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:09.982909 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:09.982928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-31 02:28:09.982947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:09.982971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:09.982982 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:09.982999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-31 02:28:09.983018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:11.415220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:11.415323 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:11.415337 | orchestrator | 2026-03-31 02:28:11.415348 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-31 02:28:11.415358 | orchestrator | Tuesday 31 March 2026 02:28:09 +0000 (0:00:01.218) 0:00:42.940 ********* 2026-03-31 02:28:11.415369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:11.415380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:11.415412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:11.415421 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:11.415431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:11.415455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:11.415503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:11.415513 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:11.415522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:11.415531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:11.415547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:11.415556 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:11.415564 | orchestrator | 2026-03-31 02:28:11.415573 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-31 02:28:11.415581 | orchestrator | Tuesday 31 March 2026 02:28:10 +0000 (0:00:00.618) 0:00:43.558 ********* 2026-03-31 02:28:11.415590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 02:28:11.415599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:11.415621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:18.160936 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:18.161035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 02:28:18.161052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:18.161087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:18.161099 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:18.161110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 02:28:18.161134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 02:28:18.161145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 02:28:18.161155 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:18.161165 | orchestrator | 2026-03-31 02:28:18.161177 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-31 02:28:18.161187 | orchestrator | Tuesday 31 March 2026 02:28:11 +0000 (0:00:00.818) 0:00:44.376 ********* 2026-03-31 02:28:18.161197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 02:28:18.161223 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 02:28:18.161234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 02:28:18.161244 | orchestrator | 2026-03-31 02:28:18.161253 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-31 02:28:18.161263 | orchestrator | Tuesday 31 March 2026 02:28:13 +0000 (0:00:01.754) 0:00:46.131 ********* 2026-03-31 02:28:18.161274 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 02:28:18.161283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 02:28:18.161293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 02:28:18.161303 | orchestrator | 2026-03-31 02:28:18.161320 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-31 02:28:18.161330 | orchestrator | Tuesday 31 March 2026 02:28:14 +0000 (0:00:01.729) 0:00:47.861 ********* 2026-03-31 02:28:18.161339 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 02:28:18.161349 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 02:28:18.161359 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 02:28:18.161368 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:18.161378 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 02:28:18.161388 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 02:28:18.161397 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:18.161407 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 02:28:18.161416 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:18.161426 | orchestrator | 2026-03-31 02:28:18.161435 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-31 02:28:18.161444 | orchestrator | Tuesday 31 March 2026 02:28:15 +0000 (0:00:00.901) 0:00:48.763 ********* 2026-03-31 02:28:18.161454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:18.161466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:18.161523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-31 02:28:18.161557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:22.279371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:22.279474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 02:28:22.279568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:22.279585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:22.279600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 02:28:22.279616 | orchestrator | 2026-03-31 02:28:22.279650 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-31 02:28:22.279665 | orchestrator | Tuesday 31 March 2026 02:28:18 +0000 (0:00:02.358) 0:00:51.121 ********* 2026-03-31 02:28:22.279681 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:28:22.279697 | orchestrator | 2026-03-31 02:28:22.279713 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-31 02:28:22.279727 | orchestrator | Tuesday 31 March 2026 02:28:18 +0000 (0:00:00.798) 0:00:51.919 ********* 2026-03-31 02:28:22.279761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 02:28:22.279803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:22.279818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.279833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.279849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 02:28:22.279870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:22.279892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 02:28:22.996026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996148 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:22.996161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996201 | orchestrator | 2026-03-31 02:28:22.996215 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using 
single external frontend] *** 2026-03-31 02:28:22.996227 | orchestrator | Tuesday 31 March 2026 02:28:22 +0000 (0:00:03.318) 0:00:55.237 ********* 2026-03-31 02:28:22.996240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 02:28:22.996292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:22.996307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996330 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:22.996343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 02:28:22.996359 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:22.996379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:22.996400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.190224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 02:28:32.190347 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:32.190369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 02:28:32.190385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.190400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.190440 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:32.190450 | orchestrator | 2026-03-31 02:28:32.190458 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-31 02:28:32.190467 | orchestrator | Tuesday 31 March 2026 02:28:22 +0000 (0:00:00.718) 0:00:55.956 ********* 2026-03-31 02:28:32.190475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190521 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:32.190543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190558 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:32.190566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-31 02:28:32.190598 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:32.190605 | orchestrator | 2026-03-31 02:28:32.190613 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-31 02:28:32.190620 | orchestrator | Tuesday 31 March 2026 02:28:24 +0000 (0:00:01.162) 0:00:57.119 ********* 2026-03-31 02:28:32.190628 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:28:32.190636 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:28:32.190644 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:28:32.190653 | orchestrator | 2026-03-31 02:28:32.190662 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-31 02:28:32.190670 | orchestrator | Tuesday 31 March 2026 02:28:25 +0000 (0:00:01.312) 0:00:58.432 ********* 2026-03-31 02:28:32.190678 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:28:32.190686 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:28:32.190694 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:28:32.190702 | orchestrator | 2026-03-31 02:28:32.190710 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-31 02:28:32.190718 | orchestrator | Tuesday 31 March 2026 02:28:27 +0000 (0:00:02.265) 0:01:00.697 ********* 2026-03-31 02:28:32.190726 | orchestrator | included: barbican for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-31 02:28:32.190735 | orchestrator | 2026-03-31 02:28:32.190743 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-31 02:28:32.190752 | orchestrator | Tuesday 31 March 2026 02:28:28 +0000 (0:00:00.697) 0:01:01.394 ********* 2026-03-31 02:28:32.190762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 02:28:32.190784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-03-31 02:28:32.190794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.190814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 02:28:32.811947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 
02:28:32.812095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812113 | orchestrator | 2026-03-31 02:28:32.812122 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-31 02:28:32.812132 | orchestrator | Tuesday 31 March 2026 02:28:32 +0000 (0:00:03.747) 0:01:05.142 ********* 2026-03-31 02:28:32.812157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 02:28:32.812166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812189 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 02:28:32.812203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 02:28:32.812212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:32.812229 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:32.812244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 02:28:42.747622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 02:28:42.747705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:28:42.747713 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:42.747719 | orchestrator | 2026-03-31 02:28:42.747725 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-31 02:28:42.747731 | orchestrator | Tuesday 31 March 2026 02:28:32 +0000 (0:00:00.624) 0:01:05.767 ********* 2026-03-31 02:28:42.747747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747760 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:42.747765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747774 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:42.747778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 02:28:42.747787 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:42.747791 | orchestrator | 2026-03-31 02:28:42.747796 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-31 02:28:42.747800 | orchestrator | Tuesday 31 March 2026 02:28:33 +0000 (0:00:00.894) 0:01:06.661 ********* 2026-03-31 02:28:42.747805 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:28:42.747810 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:28:42.747814 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:28:42.747818 | orchestrator | 2026-03-31 02:28:42.747823 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-31 02:28:42.747827 | orchestrator | Tuesday 31 March 2026 02:28:35 +0000 (0:00:01.556) 0:01:08.218 ********* 2026-03-31 02:28:42.747846 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:28:42.747851 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:28:42.747855 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:28:42.747860 | orchestrator | 2026-03-31 02:28:42.747864 | orchestrator | TASK [include_role : blazar] 
*************************************************** 2026-03-31 02:28:42.747869 | orchestrator | Tuesday 31 March 2026 02:28:37 +0000 (0:00:02.059) 0:01:10.278 ********* 2026-03-31 02:28:42.747873 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:42.747877 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:42.747882 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:42.747886 | orchestrator | 2026-03-31 02:28:42.747890 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-31 02:28:42.747895 | orchestrator | Tuesday 31 March 2026 02:28:37 +0000 (0:00:00.334) 0:01:10.613 ********* 2026-03-31 02:28:42.747899 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:28:42.747903 | orchestrator | 2026-03-31 02:28:42.747908 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-31 02:28:42.747922 | orchestrator | Tuesday 31 March 2026 02:28:38 +0000 (0:00:00.757) 0:01:11.370 ********* 2026-03-31 02:28:42.747929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 02:28:42.747938 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 02:28:42.747943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 02:28:42.747947 | orchestrator | 2026-03-31 02:28:42.747952 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-31 02:28:42.747957 | orchestrator | Tuesday 31 March 2026 02:28:41 +0000 (0:00:02.920) 0:01:14.290 ********* 2026-03-31 
02:28:42.747966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 02:28:42.747970 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:42.747979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 02:28:50.714467 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:50.714609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 02:28:50.714619 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:50.714624 | orchestrator | 2026-03-31 02:28:50.714629 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-31 02:28:50.714634 | orchestrator | Tuesday 31 March 2026 02:28:42 +0000 (0:00:01.420) 0:01:15.711 ********* 2026-03-31 02:28:50.714651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 02:28:50.714658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}})  2026-03-31 02:28:50.714664 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:50.714668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 02:28:50.714684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 02:28:50.714688 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:50.714692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 02:28:50.714696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 02:28:50.714700 | 
orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:50.714703 | orchestrator | 2026-03-31 02:28:50.714707 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-31 02:28:50.714711 | orchestrator | Tuesday 31 March 2026 02:28:44 +0000 (0:00:01.730) 0:01:17.442 ********* 2026-03-31 02:28:50.714715 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:50.714718 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:50.714722 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:50.714726 | orchestrator | 2026-03-31 02:28:50.714732 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-31 02:28:50.714748 | orchestrator | Tuesday 31 March 2026 02:28:44 +0000 (0:00:00.442) 0:01:17.884 ********* 2026-03-31 02:28:50.714753 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:28:50.714756 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:28:50.714760 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:28:50.714764 | orchestrator | 2026-03-31 02:28:50.714767 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-31 02:28:50.714771 | orchestrator | Tuesday 31 March 2026 02:28:46 +0000 (0:00:01.374) 0:01:19.259 ********* 2026-03-31 02:28:50.714775 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:28:50.714779 | orchestrator | 2026-03-31 02:28:50.714783 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-31 02:28:50.714786 | orchestrator | Tuesday 31 March 2026 02:28:47 +0000 (0:00:01.025) 0:01:20.284 ********* 2026-03-31 02:28:50.714794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 02:28:50.714803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:28:50.714808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 02:28:50.714814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 02:28:50.714822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 02:28:51.449188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449314 | orchestrator | 2026-03-31 02:28:51.449325 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-31 02:28:51.449334 | orchestrator | Tuesday 31 March 2026 02:28:50 +0000 (0:00:03.539) 0:01:23.824 ********* 2026-03-31 02:28:51.449344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 02:28:51.449354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 02:28:51.449362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:28:51.449377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395809 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:28:56.395823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395835 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:28:56.395847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 02:28:56.395880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 02:28:56.395941 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:28:56.395953 | orchestrator |
2026-03-31 02:28:56.395965 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-31 02:28:56.395977 | orchestrator | Tuesday 31 March 2026 02:28:51 +0000 (0:00:00.698) 0:01:24.523 *********
2026-03-31 02:28:56.395989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396015 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:28:56.396026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396048 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:28:56.396059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-31 02:28:56.396114 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:28:56.396124 | orchestrator |
2026-03-31 02:28:56.396135 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-31 02:28:56.396146 | orchestrator | Tuesday 31 March 2026 02:28:52 +0000 (0:00:01.199) 0:01:25.722 *********
2026-03-31 02:28:56.396157 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:28:56.396177 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:28:56.396187 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:28:56.396198 | orchestrator |
2026-03-31 02:28:56.396209 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-31 02:28:56.396220 | orchestrator | Tuesday 31 March 2026 02:28:54 +0000 (0:00:01.323) 0:01:27.046 *********
2026-03-31 02:28:56.396230 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:28:56.396242 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:28:56.396253 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:28:56.396264 | orchestrator |
2026-03-31 02:28:56.396275 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-31 02:28:56.396294 | orchestrator | Tuesday 31 March 2026 02:28:56 +0000 (0:00:02.306) 0:01:29.352 *********
2026-03-31 02:29:01.633560 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:29:01.633661 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:29:01.633679 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:29:01.633693 | orchestrator |
2026-03-31 02:29:01.633708 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-31 02:29:01.633745 | orchestrator | Tuesday 31 March 2026 02:28:56 +0000 (0:00:00.362) 0:01:29.714 *********
2026-03-31 02:29:01.633754 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:29:01.633762 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:29:01.633770 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:29:01.633778 | orchestrator |
2026-03-31 02:29:01.633786 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-31 02:29:01.633794 | orchestrator | Tuesday 31 March 2026 02:28:57 +0000 (0:00:00.346) 0:01:30.061 *********
2026-03-31 02:29:01.633802 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:29:01.633810 | orchestrator |
2026-03-31 02:29:01.633818 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-31 02:29:01.633840 | orchestrator | Tuesday 31 March 2026 02:28:58 +0000 (0:00:01.036) 0:01:31.097 *********
2026-03-31 02:29:01.633853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:01.633865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:01.633877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:01.633914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.633941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:01.633955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.633963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.633972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.633988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.634076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.634090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 02:29:01.634107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.605875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.605965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.605976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:02.606001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:02.606008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606113 | orchestrator |
2026-03-31 02:29:02.606122 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-31 02:29:02.606130 | orchestrator | Tuesday 31 March 2026 02:29:01 +0000 (0:00:03.835) 0:01:34.933 *********
2026-03-31 02:29:02.606136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:02.606143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:02.606149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 02:29:02.606161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.122713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.122819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.122860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.122875 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:29:03.122897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:03.122928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:03.123723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.123810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.123836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.123877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.123906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-31 02:29:03.123927 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:29:03.123950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 02:29:03.123971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 02:29:03.124005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name':
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 02:29:13.713716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 02:29:13.713823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 02:29:13.713851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
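
The designate items above all declare a kolla-style healthcheck of the form `healthcheck_port <service> 5672`, i.e. a TCP reachability probe (here against the service's RabbitMQ connection port), retried per the `interval`/`retries`/`timeout` fields. A minimal sketch of that idea, assuming nothing about the real `healthcheck_port` script shipped in the kolla images beyond "try a TCP connection and report success or failure":

```python
import socket

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative sketch only; the actual healthcheck_port helper inside the
    kolla containers may check the listening process differently.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the container definitions above, the runtime would invoke such a check every 30 seconds and mark the container unhealthy after 3 consecutive failures.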
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:29:13.713863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 02:29:13.713880 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:13.713897 | orchestrator | 2026-03-31 02:29:13.713913 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-31 02:29:13.713931 | orchestrator | Tuesday 31 March 2026 02:29:03 +0000 (0:00:01.150) 0:01:36.084 ********* 2026-03-31 02:29:13.713946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.713963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.713974 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:13.713983 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.713993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.714001 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:13.714010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.714092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 02:29:13.714103 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:13.714112 | orchestrator | 2026-03-31 02:29:13.714120 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-31 02:29:13.714144 | orchestrator | Tuesday 31 March 2026 02:29:04 +0000 (0:00:01.393) 0:01:37.477 ********* 2026-03-31 02:29:13.714154 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:13.714163 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:13.714172 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:13.714180 | orchestrator | 2026-03-31 02:29:13.714189 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-31 02:29:13.714198 | orchestrator | Tuesday 31 March 2026 02:29:05 +0000 (0:00:01.299) 0:01:38.776 ********* 2026-03-31 02:29:13.714206 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:13.714215 | orchestrator | changed: [testbed-node-1] 2026-03-31 
02:29:13.714223 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:13.714232 | orchestrator | 2026-03-31 02:29:13.714240 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-31 02:29:13.714249 | orchestrator | Tuesday 31 March 2026 02:29:07 +0000 (0:00:02.057) 0:01:40.833 ********* 2026-03-31 02:29:13.714257 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:13.714266 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:13.714275 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:13.714285 | orchestrator | 2026-03-31 02:29:13.714295 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-31 02:29:13.714305 | orchestrator | Tuesday 31 March 2026 02:29:08 +0000 (0:00:00.324) 0:01:41.158 ********* 2026-03-31 02:29:13.714315 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:29:13.714325 | orchestrator | 2026-03-31 02:29:13.714334 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-31 02:29:13.714344 | orchestrator | Tuesday 31 March 2026 02:29:09 +0000 (0:00:01.389) 0:01:42.547 ********* 2026-03-31 02:29:13.714365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 02:29:13.714386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:17.077634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 02:29:17.077753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:17.077835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 02:29:17.077851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:17.077873 | orchestrator | 2026-03-31 02:29:17.077886 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-31 02:29:17.077923 | orchestrator | Tuesday 31 March 2026 02:29:13 +0000 (0:00:04.289) 0:01:46.837 ********* 2026-03-31 02:29:17.077953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
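
Console output in this shape (`timestamp | node | status: [host] => (item=...)`) is regular enough to summarize mechanically, which can help when a run emits thousands of `skipping`/`changed` lines like the ones above. A small sketch; the line format is inferred from this log, not from any Zuul or Ansible callback specification:

```python
import re
from collections import Counter

# Matches e.g. "changed: [testbed-node-0]" or "skipping: [testbed-node-2]"
STATUS_RE = re.compile(r"\b(ok|changed|skipping|failed):\s+\[([\w.-]+)\]")

def summarize(log_text: str) -> Counter:
    """Count (status, host) pairs seen in an Ansible console log."""
    return Counter(STATUS_RE.findall(log_text))
```

Run over this section, such a summary would show mostly `skipping` results for the three testbed nodes, with `changed` confined to the haproxy-config and proxysql-config copy tasks.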
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 02:29:17.173263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:17.173388 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:17.173405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 02:29:17.173450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:17.173471 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:17.173482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 02:29:17.173508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 02:29:29.233657 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:29.233799 | orchestrator | 2026-03-31 02:29:29.233826 | orchestrator | TASK [haproxy-config : Configuring 
firewall for glance] ************************ 2026-03-31 02:29:29.233850 | orchestrator | Tuesday 31 March 2026 02:29:17 +0000 (0:00:03.301) 0:01:50.138 ********* 2026-03-31 02:29:29.233876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.233904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.233926 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:29.233948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.233969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.233989 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:29.234011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.234133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 02:29:29.234156 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:29.234178 | orchestrator | 2026-03-31 02:29:29.234197 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-31 02:29:29.234217 | orchestrator | Tuesday 31 March 2026 02:29:20 +0000 (0:00:03.728) 0:01:53.866 ********* 2026-03-31 
02:29:29.234270 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:29.234288 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:29.234300 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:29.234313 | orchestrator | 2026-03-31 02:29:29.234324 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-31 02:29:29.234336 | orchestrator | Tuesday 31 March 2026 02:29:22 +0000 (0:00:01.476) 0:01:55.343 ********* 2026-03-31 02:29:29.234346 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:29.234357 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:29.234368 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:29.234379 | orchestrator | 2026-03-31 02:29:29.234390 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-31 02:29:29.234422 | orchestrator | Tuesday 31 March 2026 02:29:24 +0000 (0:00:02.147) 0:01:57.490 ********* 2026-03-31 02:29:29.234434 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:29.234445 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:29.234455 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:29.234466 | orchestrator | 2026-03-31 02:29:29.234477 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-31 02:29:29.234488 | orchestrator | Tuesday 31 March 2026 02:29:24 +0000 (0:00:00.328) 0:01:57.819 ********* 2026-03-31 02:29:29.234499 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:29:29.234509 | orchestrator | 2026-03-31 02:29:29.234520 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-31 02:29:29.234531 | orchestrator | Tuesday 31 March 2026 02:29:25 +0000 (0:00:01.047) 0:01:58.866 ********* 2026-03-31 02:29:29.234543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 02:29:29.234584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 02:29:29.234596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 02:29:29.234607 | orchestrator | 2026-03-31 02:29:29.234618 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-31 02:29:29.234640 | orchestrator | Tuesday 31 March 2026 02:29:28 +0000 (0:00:03.069) 0:02:01.936 ********* 2026-03-31 02:29:29.234652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 02:29:29.234665 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:29.234684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 02:29:38.452905 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:38.453012 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 02:29:38.453099 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:38.453118 | orchestrator | 2026-03-31 02:29:38.453129 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-31 02:29:38.453140 | orchestrator | Tuesday 31 March 2026 02:29:29 +0000 (0:00:00.489) 0:02:02.426 ********* 2026-03-31 02:29:38.453151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453174 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:38.453184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453203 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:38.453213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-31 02:29:38.453251 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:38.453261 | orchestrator | 2026-03-31 02:29:38.453271 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-31 02:29:38.453281 | orchestrator | Tuesday 31 March 2026 02:29:30 +0000 (0:00:00.964) 0:02:03.391 ********* 2026-03-31 02:29:38.453291 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:38.453300 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:38.453309 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:38.453319 | orchestrator | 2026-03-31 02:29:38.453329 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-31 02:29:38.453338 | orchestrator | Tuesday 31 March 2026 02:29:31 +0000 (0:00:01.335) 0:02:04.726 ********* 2026-03-31 02:29:38.453348 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:38.453357 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:38.453367 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:38.453376 | orchestrator | 2026-03-31 02:29:38.453388 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-31 02:29:38.453403 | orchestrator | Tuesday 31 March 2026 02:29:33 +0000 (0:00:02.080) 0:02:06.806 ********* 2026-03-31 02:29:38.453414 | orchestrator 
| skipping: [testbed-node-0] 2026-03-31 02:29:38.453425 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:38.453436 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:38.453447 | orchestrator | 2026-03-31 02:29:38.453458 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-31 02:29:38.453469 | orchestrator | Tuesday 31 March 2026 02:29:34 +0000 (0:00:00.321) 0:02:07.128 ********* 2026-03-31 02:29:38.453480 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:29:38.453491 | orchestrator | 2026-03-31 02:29:38.453501 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-31 02:29:38.453510 | orchestrator | Tuesday 31 March 2026 02:29:35 +0000 (0:00:01.146) 0:02:08.274 ********* 2026-03-31 02:29:38.453545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 02:29:38.453603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 02:29:38.453627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 02:29:40.117357 | orchestrator | 2026-03-31 02:29:40.117470 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-31 02:29:40.117491 | orchestrator | Tuesday 31 March 2026 02:29:38 +0000 (0:00:03.137) 0:02:11.412 ********* 2026-03-31 02:29:40.117530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-31 02:29:40.117551 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:40.117639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 02:29:40.117763 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:40.117793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 02:29:40.117809 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:40.117823 | orchestrator | 2026-03-31 02:29:40.117838 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-31 02:29:40.117852 | orchestrator | Tuesday 31 March 2026 02:29:39 +0000 (0:00:00.679) 0:02:12.091 ********* 2026-03-31 02:29:40.117869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:40.117898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:40.117916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:40.117944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:49.057999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 02:29:49.058125 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:49.058138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:49.058149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:49.058171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:49.058180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:49.058188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 02:29:49.058194 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:49.058201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:49.058207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:49.058213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 02:29:49.058238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 02:29:49.058245 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 02:29:49.058251 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:49.058257 | orchestrator | 2026-03-31 02:29:49.058265 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-31 02:29:49.058272 | orchestrator | Tuesday 31 March 2026 02:29:40 +0000 (0:00:00.986) 0:02:13.078 ********* 2026-03-31 02:29:49.058279 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:49.058285 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:49.058291 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:49.058297 | orchestrator | 2026-03-31 02:29:49.058303 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-31 02:29:49.058310 | orchestrator | Tuesday 31 March 2026 02:29:41 +0000 (0:00:01.633) 0:02:14.711 ********* 2026-03-31 02:29:49.058316 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:49.058322 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:49.058329 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:49.058335 | orchestrator | 2026-03-31 02:29:49.058341 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-31 02:29:49.058347 | orchestrator | Tuesday 31 March 2026 02:29:43 +0000 (0:00:02.043) 0:02:16.755 ********* 2026-03-31 02:29:49.058353 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:49.058359 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:49.058378 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:49.058384 | orchestrator | 2026-03-31 02:29:49.058391 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-31 02:29:49.058397 | orchestrator | Tuesday 31 March 2026 02:29:44 +0000 (0:00:00.309) 0:02:17.065 
********* 2026-03-31 02:29:49.058403 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:49.058409 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:49.058415 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:49.058422 | orchestrator | 2026-03-31 02:29:49.058428 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-31 02:29:49.058434 | orchestrator | Tuesday 31 March 2026 02:29:44 +0000 (0:00:00.331) 0:02:17.397 ********* 2026-03-31 02:29:49.058440 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:29:49.058446 | orchestrator | 2026-03-31 02:29:49.058452 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-31 02:29:49.058459 | orchestrator | Tuesday 31 March 2026 02:29:45 +0000 (0:00:01.228) 0:02:18.625 ********* 2026-03-31 02:29:49.058473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 
02:29:49.058489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:49.058496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:49.058504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 02:29:49.058516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:49.701029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:49.701178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 02:29:49.701223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:49.701237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:49.701248 | orchestrator | 2026-03-31 02:29:49.701261 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-31 02:29:49.701274 | orchestrator | Tuesday 31 March 2026 02:29:49 +0000 (0:00:03.377) 0:02:22.002 ********* 2026-03-31 02:29:49.701307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 02:29:49.701328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:49.701340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:49.701359 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:49.701373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 
02:29:49.701385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:49.701396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:49.701407 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:49.701432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 02:29:59.124784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 02:29:59.124879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 02:29:59.124890 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:59.124899 | orchestrator | 2026-03-31 02:29:59.124907 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-03-31 02:29:59.124915 | orchestrator | Tuesday 31 March 2026 02:29:49 +0000 (0:00:00.654) 0:02:22.657 ********* 2026-03-31 02:29:59.124924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124946 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:59.124952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124958 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:59.124964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 02:29:59.124977 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:59.124984 | orchestrator | 2026-03-31 02:29:59.124990 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-31 02:29:59.124997 | orchestrator | Tuesday 31 March 2026 02:29:50 +0000 (0:00:01.148) 0:02:23.805 ********* 2026-03-31 02:29:59.125030 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:59.125038 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:59.125064 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:59.125071 | orchestrator | 2026-03-31 02:29:59.125077 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-31 02:29:59.125084 | orchestrator | Tuesday 31 March 2026 02:29:52 +0000 (0:00:01.325) 0:02:25.130 ********* 2026-03-31 02:29:59.125091 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:29:59.125097 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:29:59.125104 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:29:59.125111 | orchestrator | 2026-03-31 02:29:59.125117 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-31 02:29:59.125124 | orchestrator | Tuesday 31 March 2026 02:29:54 +0000 (0:00:02.124) 0:02:27.255 ********* 2026-03-31 02:29:59.125131 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:29:59.125151 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:29:59.125158 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:29:59.125164 | orchestrator | 2026-03-31 02:29:59.125171 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-31 02:29:59.125193 | orchestrator | Tuesday 31 March 2026 02:29:54 +0000 (0:00:00.354) 0:02:27.610 ********* 2026-03-31 02:29:59.125200 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:29:59.125207 | orchestrator | 2026-03-31 02:29:59.125213 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-31 02:29:59.125219 | orchestrator | Tuesday 31 March 2026 02:29:55 +0000 (0:00:01.231) 0:02:28.841 ********* 2026-03-31 02:29:59.125228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 02:29:59.125238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:29:59.125247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 02:29:59.125259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:29:59.125274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 02:30:04.898554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:30:04.898746 | orchestrator | 2026-03-31 02:30:04.898767 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-31 02:30:04.898780 | orchestrator | Tuesday 31 March 2026 02:29:59 +0000 (0:00:03.245) 0:02:32.086 ********* 2026-03-31 02:30:04.898795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 02:30:04.898886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:30:04.898926 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:04.898946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 02:30:04.898979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:30:04.898992 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:04.899003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 02:30:04.899015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:30:04.899035 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:04.899046 | orchestrator | 2026-03-31 02:30:04.899058 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-31 02:30:04.899069 | orchestrator | Tuesday 31 March 2026 02:29:59 +0000 (0:00:00.695) 0:02:32.781 ********* 2026-03-31 02:30:04.899081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899107 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:04.899118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899140 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:04.899151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 02:30:04.899173 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:04.899184 | orchestrator | 2026-03-31 02:30:04.899206 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-31 02:30:04.899225 | orchestrator | Tuesday 31 March 2026 02:30:00 +0000 (0:00:01.012) 0:02:33.793 ********* 2026-03-31 02:30:04.899243 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:04.899260 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:04.899278 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:04.899295 | orchestrator | 2026-03-31 02:30:04.899313 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-31 02:30:04.899331 | orchestrator | Tuesday 31 March 2026 02:30:02 +0000 (0:00:01.768) 0:02:35.562 ********* 
2026-03-31 02:30:04.899347 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:04.899365 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:04.899383 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:04.899401 | orchestrator | 2026-03-31 02:30:04.899420 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-31 02:30:04.899449 | orchestrator | Tuesday 31 March 2026 02:30:04 +0000 (0:00:02.288) 0:02:37.850 ********* 2026-03-31 02:30:09.547208 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:30:09.547308 | orchestrator | 2026-03-31 02:30:09.547323 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-31 02:30:09.547335 | orchestrator | Tuesday 31 March 2026 02:30:05 +0000 (0:00:01.103) 0:02:38.954 ********* 2026-03-31 02:30:09.547349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 02:30:09.547394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 02:30:09.547479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 02:30:09.547536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:09.547673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590199 | orchestrator | 2026-03-31 02:30:10.590283 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-31 02:30:10.590294 | orchestrator | Tuesday 31 March 2026 02:30:09 +0000 (0:00:03.643) 0:02:42.598 ********* 2026-03-31 02:30:10.590322 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 02:30:10.590333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590349 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:10.590364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 02:30:10.590381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590397 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:10.590401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 02:30:10.590405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 02:30:10.590420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 02:30:22.031601 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:22.031741 | orchestrator | 2026-03-31 02:30:22.031752 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-31 02:30:22.031763 | orchestrator | Tuesday 31 March 2026 02:30:10 +0000 (0:00:01.046) 0:02:43.645 ********* 2026-03-31 02:30:22.031771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031790 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:22.031798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031812 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:22.031819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 02:30:22.031832 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:22.031839 | orchestrator | 2026-03-31 02:30:22.031845 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-31 02:30:22.031852 | orchestrator | Tuesday 31 March 2026 02:30:11 +0000 (0:00:00.911) 0:02:44.556 ********* 2026-03-31 02:30:22.031859 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:22.031866 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:22.031873 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:22.031880 | orchestrator | 2026-03-31 02:30:22.031886 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-31 02:30:22.031894 | orchestrator | Tuesday 31 March 2026 02:30:12 +0000 (0:00:01.351) 0:02:45.907 ********* 2026-03-31 02:30:22.031901 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:22.031907 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:22.031914 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:22.031921 | orchestrator | 2026-03-31 02:30:22.031927 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-31 02:30:22.031934 | orchestrator | Tuesday 31 March 2026 02:30:15 +0000 (0:00:02.142) 0:02:48.050 
********* 2026-03-31 02:30:22.031941 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:30:22.031948 | orchestrator | 2026-03-31 02:30:22.031955 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-31 02:30:22.031962 | orchestrator | Tuesday 31 March 2026 02:30:16 +0000 (0:00:01.385) 0:02:49.435 ********* 2026-03-31 02:30:22.031969 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 02:30:22.031976 | orchestrator | 2026-03-31 02:30:22.031982 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-31 02:30:22.032043 | orchestrator | Tuesday 31 March 2026 02:30:19 +0000 (0:00:03.199) 0:02:52.635 ********* 2026-03-31 02:30:22.032091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:30:22.032104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 02:30:22.032111 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:22.032122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:30:22.032135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 02:30:22.032141 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 02:30:22.032154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:30:24.551148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-31 02:30:24.551252 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:24.551268 | orchestrator |
2026-03-31 02:30:24.551280 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-31 02:30:24.551291 | orchestrator | Tuesday 31 March 2026 02:30:22 +0000 (0:00:02.350) 0:02:54.985 *********
2026-03-31 02:30:24.551358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-31 02:30:24.551384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-31 02:30:24.551395 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:24.551424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-31 02:30:24.551464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-31 02:30:24.551476 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:24.551487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-31 02:30:24.551504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-31 02:30:34.767362 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:34.767469 | orchestrator |
2026-03-31 02:30:34.767482 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-31 02:30:34.767493 | orchestrator | Tuesday 31 March 2026 02:30:24 +0000 (0:00:02.523) 0:02:57.509 *********
2026-03-31 02:30:34.767504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767558 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:34.767568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767587 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:34.767596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-31 02:30:34.767614 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:34.767666 | orchestrator |
2026-03-31 02:30:34.767675 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-31 02:30:34.767684 | orchestrator | Tuesday 31 March 2026 02:30:27 +0000 (0:00:03.036) 0:03:00.545 *********
2026-03-31 02:30:34.767693 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:30:34.767723 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:30:34.767733 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:30:34.767742 | orchestrator |
2026-03-31 02:30:34.767751 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-31 02:30:34.767759 | orchestrator | Tuesday 31 March 2026 02:30:29 +0000 (0:00:02.100) 0:03:02.646 *********
2026-03-31 02:30:34.767768 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:34.767776 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:34.767789 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:34.767803 | orchestrator |
2026-03-31 02:30:34.767818 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-31 02:30:34.767833 | orchestrator | Tuesday 31 March 2026 02:30:31 +0000 (0:00:01.612) 0:03:04.258 *********
2026-03-31 02:30:34.767847 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:34.767861 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:34.767875 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:34.767890 | orchestrator |
2026-03-31 02:30:34.767906 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-31 02:30:34.767923 | orchestrator | Tuesday 31 March 2026 02:30:31 +0000 (0:00:00.342) 0:03:04.601 *********
2026-03-31 02:30:34.767938 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:30:34.767953 | orchestrator |
2026-03-31 02:30:34.767963 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-31 02:30:34.767973 | orchestrator | Tuesday 31 March 2026 02:30:33 +0000 (0:00:01.404) 0:03:06.005 *********
2026-03-31 02:30:34.767991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:34.768006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:34.768016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:34.768026 | orchestrator |
2026-03-31 02:30:34.768036 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-31 02:30:34.768095 | orchestrator | Tuesday 31 March 2026 02:30:34 +0000 (0:00:01.514) 0:03:07.520 *********
2026-03-31 02:30:34.768117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:43.351887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:43.351997 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:43.352013 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:43.352024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-31 02:30:43.352035 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:43.352046 | orchestrator |
2026-03-31 02:30:43.352056 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-31 02:30:43.352067 | orchestrator | Tuesday 31 March 2026 02:30:34 +0000 (0:00:00.391) 0:03:07.912 *********
2026-03-31 02:30:43.352078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-31 02:30:43.352090 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:43.352100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-31 02:30:43.352110 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:43.352119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-31 02:30:43.352152 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:43.352162 | orchestrator |
2026-03-31 02:30:43.352211 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-31 02:30:43.352222 | orchestrator | Tuesday 31 March 2026 02:30:35 +0000 (0:00:00.887) 0:03:08.799 *********
2026-03-31 02:30:43.352232 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:43.352241 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:43.352251 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:43.352260 | orchestrator |
2026-03-31 02:30:43.352270 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-31 02:30:43.352280 | orchestrator | Tuesday 31 March 2026 02:30:36 +0000 (0:00:00.450) 0:03:09.249 *********
2026-03-31 02:30:43.352289 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:43.352299 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:43.352308 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:43.352318 | orchestrator |
2026-03-31 02:30:43.352327 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-31 02:30:43.352337 | orchestrator | Tuesday 31 March 2026 02:30:37 +0000 (0:00:01.361) 0:03:10.611 *********
2026-03-31 02:30:43.352346 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:30:43.352356 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:30:43.352365 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:30:43.352375 | orchestrator |
2026-03-31 02:30:43.352384 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-31 02:30:43.352394 | orchestrator | Tuesday 31 March 2026 02:30:37 +0000 (0:00:00.348) 0:03:10.960 *********
2026-03-31 02:30:43.352403 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:30:43.352413 | orchestrator |
2026-03-31 02:30:43.352425 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-31 02:30:43.352435 | orchestrator | Tuesday 31 March 2026 02:30:39 +0000 (0:00:01.507) 0:03:12.467 *********
2026-03-31 02:30:43.352464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 02:30:43.352484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.352497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.352520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.352532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 02:30:43.352551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-31 02:30:43.556076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 02:30:43.556239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 02:30:43.556263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-31 02:30:43.556336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 02:30:43.556348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.556371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-31 02:30:43.556396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 02:30:43.712228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-31 02:30:43.712391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-31 02:30:43.712420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-31 02:30:43.712434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 02:30:43.712447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130',
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.712489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.712511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.712523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 02:30:43.712535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.712547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.712560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.712585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.926749 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:43.926780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.926793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.926847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:43.926882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.926906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:43.926940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:43.926966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 02:30:45.017265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.017387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.017405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 02:30:45.017421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.017433 | orchestrator | 2026-03-31 02:30:45.017488 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-31 02:30:45.017526 | orchestrator | Tuesday 31 March 2026 02:30:43 +0000 (0:00:04.417) 0:03:16.885 ********* 2026-03-31 02:30:45.017547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 02:30:45.017572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.017580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.017586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.017592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-31 02:30:45.017607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.017621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.202316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.202465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.202486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.202500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.202547 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 02:30:45.202577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.202611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.202626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 02:30:45.202733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 02:30:45.202755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.202790 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:45.202822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.202856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 02:30:45.358399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.358433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.358452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.358479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 02:30:45.358497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.358503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.358510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 02:30:45.358521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 02:30:45.653071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.653288 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:45.653305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 02:30:45.653367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.653403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:45.653415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:45.653452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:45.653464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 02:30:45.653485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 02:30:56.975359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 02:30:56.975531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 02:30:56.975572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 02:30:56.975587 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:56.975601 | orchestrator | 2026-03-31 02:30:56.975613 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-31 02:30:56.975625 | orchestrator | Tuesday 31 March 2026 02:30:45 +0000 (0:00:01.730) 0:03:18.615 ********* 2026-03-31 02:30:56.975637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-31 02:30:56.975679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}})  2026-03-31 02:30:56.975692 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:30:56.975703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-31 02:30:56.975714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-31 02:30:56.975724 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:30:56.975735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-31 02:30:56.975746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-31 02:30:56.975765 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:30:56.975776 | orchestrator | 2026-03-31 02:30:56.975787 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-31 02:30:56.975798 | orchestrator | Tuesday 31 March 2026 02:30:47 +0000 (0:00:02.199) 0:03:20.814 ********* 2026-03-31 02:30:56.975809 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:56.975819 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:56.975850 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:56.975862 | orchestrator | 2026-03-31 02:30:56.975873 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-31 02:30:56.975884 | orchestrator | Tuesday 31 March 2026 02:30:49 +0000 (0:00:01.345) 0:03:22.159 ********* 2026-03-31 02:30:56.975894 | 
orchestrator | changed: [testbed-node-0] 2026-03-31 02:30:56.975905 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:30:56.975915 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:30:56.975926 | orchestrator | 2026-03-31 02:30:56.975937 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-31 02:30:56.975948 | orchestrator | Tuesday 31 March 2026 02:30:51 +0000 (0:00:02.239) 0:03:24.399 ********* 2026-03-31 02:30:56.975958 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:30:56.975969 | orchestrator | 2026-03-31 02:30:56.975980 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-31 02:30:56.975990 | orchestrator | Tuesday 31 March 2026 02:30:52 +0000 (0:00:01.325) 0:03:25.724 ********* 2026-03-31 02:30:56.976003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 02:30:56.976021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 02:30:56.976033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 02:30:56.976051 | orchestrator | 2026-03-31 02:30:56.976062 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-31 02:30:56.976074 | orchestrator | Tuesday 31 March 2026 02:30:56 +0000 
(0:00:03.688) 0:03:29.413 ********* 2026-03-31 02:30:56.976093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 02:31:07.585370 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:07.585504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}})  2026-03-31 02:31:07.585531 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:07.585568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 02:31:07.585586 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:07.585602 | orchestrator | 2026-03-31 02:31:07.585619 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-31 02:31:07.585638 | orchestrator | Tuesday 31 March 2026 02:30:56 +0000 (0:00:00.523) 0:03:29.937 ********* 2026-03-31 02:31:07.585683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585754 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:07.585770 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585803 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:07.585819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-31 02:31:07.585852 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:07.585868 | orchestrator | 2026-03-31 02:31:07.585886 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-31 02:31:07.585903 | orchestrator | Tuesday 31 March 2026 02:30:57 +0000 (0:00:00.853) 0:03:30.790 ********* 2026-03-31 02:31:07.585920 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:31:07.585933 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:07.585945 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:07.585956 | orchestrator | 2026-03-31 02:31:07.585967 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-31 02:31:07.585978 | orchestrator | Tuesday 31 March 2026 02:30:59 +0000 (0:00:01.967) 0:03:32.757 ********* 2026-03-31 02:31:07.585989 | orchestrator | changed: [testbed-node-0] 2026-03-31 
02:31:07.586001 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:07.586102 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:07.586122 | orchestrator | 2026-03-31 02:31:07.586140 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-31 02:31:07.586158 | orchestrator | Tuesday 31 March 2026 02:31:01 +0000 (0:00:01.921) 0:03:34.679 ********* 2026-03-31 02:31:07.586177 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:31:07.586191 | orchestrator | 2026-03-31 02:31:07.586202 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-31 02:31:07.586213 | orchestrator | Tuesday 31 March 2026 02:31:03 +0000 (0:00:01.620) 0:03:36.300 ********* 2026-03-31 02:31:07.586230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:07.586263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:07.586275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:07.586296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:08.593867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594159 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:08.594178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594200 | orchestrator | 2026-03-31 02:31:08.594213 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-31 02:31:08.594225 | orchestrator | Tuesday 31 March 2026 02:31:07 +0000 (0:00:04.240) 0:03:40.540 ********* 2026-03-31 02:31:08.594259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 02:31:08.594280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:08.594308 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:08.594321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 02:31:08.594339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:20.069290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:20.069401 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:20.069436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 02:31:20.069471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 02:31:20.069480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 02:31:20.069488 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:20.069497 | orchestrator | 2026-03-31 02:31:20.069506 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-31 02:31:20.069517 | orchestrator | Tuesday 31 March 2026 02:31:08 +0000 (0:00:01.015) 0:03:41.555 ********* 2026-03-31 02:31:20.069527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069585 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:20.069594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069634 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:20.069642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 
02:31:20.069720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 02:31:20.069729 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:20.069737 | orchestrator | 2026-03-31 02:31:20.069745 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-31 02:31:20.069753 | orchestrator | Tuesday 31 March 2026 02:31:09 +0000 (0:00:01.299) 0:03:42.855 ********* 2026-03-31 02:31:20.069762 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:31:20.069770 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:20.069778 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:20.069786 | orchestrator | 2026-03-31 02:31:20.069795 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-31 02:31:20.069803 | orchestrator | Tuesday 31 March 2026 02:31:11 +0000 (0:00:01.475) 0:03:44.330 ********* 2026-03-31 02:31:20.069812 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:31:20.069820 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:20.069828 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:20.069836 | orchestrator | 2026-03-31 02:31:20.069844 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-31 02:31:20.069852 | orchestrator | Tuesday 31 March 2026 02:31:13 +0000 (0:00:02.183) 0:03:46.514 ********* 2026-03-31 02:31:20.069860 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:31:20.069868 | orchestrator | 2026-03-31 02:31:20.069876 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-31 02:31:20.069884 | orchestrator | Tuesday 31 March 2026 02:31:15 +0000 (0:00:01.596) 
0:03:48.110 ********* 2026-03-31 02:31:20.069893 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-31 02:31:20.069902 | orchestrator | 2026-03-31 02:31:20.069911 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-31 02:31:20.069919 | orchestrator | Tuesday 31 March 2026 02:31:15 +0000 (0:00:00.843) 0:03:48.953 ********* 2026-03-31 02:31:20.069928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 02:31:20.069950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 02:31:32.421064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 02:31:32.421179 | orchestrator | 2026-03-31 02:31:32.421196 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-31 02:31:32.421211 | orchestrator | Tuesday 31 March 2026 02:31:20 +0000 (0:00:04.069) 0:03:53.023 ********* 2026-03-31 02:31:32.421226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421239 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:32.421268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421282 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:32.421294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421306 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:32.421318 | orchestrator | 2026-03-31 02:31:32.421330 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-31 02:31:32.421342 | orchestrator | Tuesday 31 March 2026 02:31:21 +0000 (0:00:01.432) 0:03:54.455 ********* 2026-03-31 02:31:32.421355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421462 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:32.421474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421486 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:32.421496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 02:31:32.421533 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:32.421546 | orchestrator | 2026-03-31 02:31:32.421557 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 02:31:32.421569 | orchestrator | Tuesday 31 March 2026 02:31:23 +0000 (0:00:01.675) 0:03:56.130 ********* 2026-03-31 02:31:32.421581 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:31:32.421592 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:32.421604 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:32.421617 | orchestrator | 2026-03-31 02:31:32.421629 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 02:31:32.421641 | orchestrator | Tuesday 31 March 2026 02:31:25 +0000 (0:00:02.580) 0:03:58.711 ********* 2026-03-31 02:31:32.421653 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:31:32.421666 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:31:32.421698 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:31:32.421708 | orchestrator | 2026-03-31 02:31:32.421718 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-31 02:31:32.421729 | orchestrator | Tuesday 31 March 2026 02:31:28 +0000 (0:00:03.041) 0:04:01.752 ********* 2026-03-31 02:31:32.421742 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-31 02:31:32.421755 | orchestrator | 
2026-03-31 02:31:32.421767 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-31 02:31:32.421780 | orchestrator | Tuesday 31 March 2026 02:31:29 +0000 (0:00:01.106) 0:04:02.859 ********* 2026-03-31 02:31:32.421801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421815 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:32.421828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421851 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:32.421864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421876 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:32.421889 | orchestrator | 2026-03-31 02:31:32.421902 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-31 02:31:32.421914 | orchestrator | Tuesday 31 March 2026 02:31:30 +0000 (0:00:01.058) 0:04:03.918 ********* 2026-03-31 02:31:32.421928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421940 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:32.421951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:32.421967 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:55.720266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 02:31:55.720395 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:55.720412 | orchestrator | 2026-03-31 02:31:55.720424 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-31 02:31:55.720434 | orchestrator | Tuesday 31 March 2026 02:31:32 +0000 (0:00:01.464) 0:04:05.382 ********* 2026-03-31 02:31:55.720445 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:55.720455 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:55.720465 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:55.720475 | orchestrator | 2026-03-31 02:31:55.720484 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 02:31:55.720494 | orchestrator | Tuesday 31 March 2026 02:31:34 +0000 (0:00:01.599) 0:04:06.982 ********* 2026-03-31 02:31:55.720504 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:31:55.720515 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:31:55.720524 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:31:55.720534 | orchestrator | 2026-03-31 02:31:55.720544 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 02:31:55.720553 | orchestrator | Tuesday 31 March 2026 02:31:36 +0000 (0:00:02.701) 0:04:09.683 ********* 2026-03-31 02:31:55.720586 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:31:55.720597 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:31:55.720606 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:31:55.720616 | orchestrator | 2026-03-31 02:31:55.720640 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-serialproxy] ***************** 2026-03-31 02:31:55.720650 | orchestrator | Tuesday 31 March 2026 02:31:39 +0000 (0:00:02.670) 0:04:12.353 ********* 2026-03-31 02:31:55.720660 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-31 02:31:55.720671 | orchestrator | 2026-03-31 02:31:55.720681 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-31 02:31:55.720691 | orchestrator | Tuesday 31 March 2026 02:31:40 +0000 (0:00:01.199) 0:04:13.553 ********* 2026-03-31 02:31:55.720754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720767 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:55.720777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720788 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:55.720800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720811 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:55.720822 | orchestrator | 2026-03-31 02:31:55.720834 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-31 02:31:55.720846 | orchestrator | Tuesday 31 March 2026 02:31:41 +0000 (0:00:01.335) 0:04:14.889 ********* 2026-03-31 02:31:55.720876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720888 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:55.720900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720920 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:55.720932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 02:31:55.720944 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:55.720956 | orchestrator | 2026-03-31 02:31:55.720972 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-31 02:31:55.720984 | orchestrator | Tuesday 31 March 2026 02:31:43 +0000 (0:00:01.324) 0:04:16.214 ********* 2026-03-31 02:31:55.720995 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:55.721006 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:55.721017 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:31:55.721028 | orchestrator | 2026-03-31 02:31:55.721039 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 02:31:55.721050 | orchestrator | Tuesday 31 March 2026 02:31:45 +0000 (0:00:01.845) 0:04:18.059 ********* 2026-03-31 02:31:55.721061 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:31:55.721072 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:31:55.721083 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:31:55.721094 | orchestrator | 2026-03-31 02:31:55.721104 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 02:31:55.721115 | orchestrator | Tuesday 31 March 2026 
02:31:47 +0000 (0:00:02.290) 0:04:20.350 ********* 2026-03-31 02:31:55.721127 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:31:55.721138 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:31:55.721149 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:31:55.721159 | orchestrator | 2026-03-31 02:31:55.721169 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-31 02:31:55.721178 | orchestrator | Tuesday 31 March 2026 02:31:50 +0000 (0:00:03.238) 0:04:23.588 ********* 2026-03-31 02:31:55.721189 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:31:55.721207 | orchestrator | 2026-03-31 02:31:55.721224 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-31 02:31:55.721240 | orchestrator | Tuesday 31 March 2026 02:31:51 +0000 (0:00:01.365) 0:04:24.954 ********* 2026-03-31 02:31:55.721260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:55.721279 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 02:31:55.721319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:31:56.493405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:56.493414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 
02:31:56.493423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:31:56.493481 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 02:31:56.493488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 02:31:56.493496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.493539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:31:56.493547 | orchestrator | 2026-03-31 02:31:56.493555 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-31 02:31:56.493563 | orchestrator | Tuesday 31 March 2026 02:31:55 +0000 (0:00:03.887) 0:04:28.842 ********* 2026-03-31 02:31:56.493578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 02:31:56.642537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 02:31:56.642620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-31 02:31:56.642630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.642639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:31:56.642663 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:31:56.642673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 02:31:56.642681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 02:31:56.642754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.642764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.642771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:31:56.642784 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:31:56.642791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 02:31:56.642798 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 02:31:56.642805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 02:31:56.642822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 02:32:08.494257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 02:32:08.494347 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:08.494357 | orchestrator | 2026-03-31 02:32:08.494365 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-31 02:32:08.494372 | orchestrator | Tuesday 31 March 2026 02:31:56 +0000 (0:00:00.767) 0:04:29.610 ********* 2026-03-31 02:32:08.494379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494413 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:08.494419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494431 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 02:32:08.494437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 02:32:08.494448 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:08.494454 | orchestrator | 2026-03-31 02:32:08.494464 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-31 02:32:08.494473 | orchestrator | Tuesday 31 March 2026 02:31:57 +0000 (0:00:01.054) 0:04:30.664 ********* 2026-03-31 02:32:08.494483 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:32:08.494492 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:32:08.494501 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:32:08.494510 | orchestrator | 2026-03-31 02:32:08.494519 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-31 02:32:08.494529 | orchestrator | Tuesday 31 March 2026 02:31:59 +0000 (0:00:01.766) 0:04:32.430 ********* 2026-03-31 02:32:08.494537 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:32:08.494545 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:32:08.494555 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:32:08.494565 | orchestrator | 2026-03-31 02:32:08.494574 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-31 02:32:08.494584 | orchestrator | Tuesday 31 March 2026 02:32:01 +0000 (0:00:02.200) 0:04:34.631 ********* 2026-03-31 02:32:08.494594 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-31 02:32:08.494604 | orchestrator | 2026-03-31 02:32:08.494614 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-31 02:32:08.494623 | orchestrator | Tuesday 31 March 2026 02:32:03 +0000 (0:00:01.538) 0:04:36.169 ********* 2026-03-31 02:32:08.494645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:32:08.494671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:32:08.494685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:32:08.494693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:32:08.494704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:32:08.494776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:32:10.644816 | orchestrator | 2026-03-31 02:32:10.644917 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-31 02:32:10.644934 | orchestrator | Tuesday 31 March 2026 02:32:08 +0000 (0:00:05.279) 0:04:41.448 ********* 2026-03-31 02:32:10.644948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 02:32:10.644965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 02:32:10.644980 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:10.645010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 02:32:10.645024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 02:32:10.645079 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:10.645092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 02:32:10.645105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 02:32:10.645117 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:10.645128 | orchestrator | 2026-03-31 02:32:10.645171 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-31 02:32:10.645183 | orchestrator | Tuesday 31 March 2026 02:32:09 +0000 (0:00:01.160) 0:04:42.609 ********* 2026-03-31 02:32:10.645195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-31 02:32:10.645208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:10.645222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:10.645244 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:10.645266 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-31 02:32:10.645279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:10.645291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:10.645304 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:10.645317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-31 02:32:10.645329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:10.645349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 02:32:17.113040 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:17.113138 | orchestrator | 2026-03-31 02:32:17.113149 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-31 02:32:17.113158 | orchestrator | Tuesday 31 March 2026 02:32:10 +0000 (0:00:00.992) 0:04:43.602 ********* 2026-03-31 
02:32:17.113166 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:17.113174 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:17.113181 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:17.113188 | orchestrator | 2026-03-31 02:32:17.113196 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-31 02:32:17.113203 | orchestrator | Tuesday 31 March 2026 02:32:11 +0000 (0:00:00.459) 0:04:44.061 ********* 2026-03-31 02:32:17.113210 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:17.113218 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:17.113225 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:17.113232 | orchestrator | 2026-03-31 02:32:17.113240 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-31 02:32:17.113247 | orchestrator | Tuesday 31 March 2026 02:32:12 +0000 (0:00:01.812) 0:04:45.874 ********* 2026-03-31 02:32:17.113254 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:32:17.113262 | orchestrator | 2026-03-31 02:32:17.113269 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-31 02:32:17.113277 | orchestrator | Tuesday 31 March 2026 02:32:14 +0000 (0:00:01.722) 0:04:47.596 ********* 2026-03-31 02:32:17.113287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 02:32:17.113316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 02:32:17.113338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:17.113347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:17.113356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 02:32:17.113379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 02:32:17.113387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 02:32:17.113395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:17.113408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:17.113416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 02:32:17.113428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 02:32:17.113436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 02:32:17.113449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:18.750315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 02:32:18.750416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 02:32:18.750468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 02:32:18.750509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:18.750526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:18.750543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:18.750580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:18.750592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 02:32:18.750610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:18.750625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:18.750634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:18.750644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:18.750663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 02:32:19.484357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:19.484451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.484482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.484493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:19.484503 | orchestrator |
2026-03-31 02:32:19.484515 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-31 02:32:19.484526 | orchestrator | Tuesday 31 March 2026 02:32:18 +0000 (0:00:04.293) 0:04:51.890 *********
2026-03-31 02:32:19.484536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-31 02:32:19.484548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 02:32:19.484594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.484605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.484615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 02:32:19.484631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 02:32:19.484644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:19.484654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.484676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-31 02:32:19.632912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.633039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 02:32:19.633057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.633071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:19.633083 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:32:19.633097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.633111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 02:32:19.633170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 02:32:19.633186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:19.633204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.633216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-31 02:32:19.633227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:19.633246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 02:32:19.633258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:19.633270 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:32:19.633291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:21.967147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:21.967261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 02:32:21.967276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 02:32:21.967287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-31 02:32:21.967313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:21.967322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:32:21.967345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 02:32:21.967354 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:32:21.967363 | orchestrator |
2026-03-31 02:32:21.967372 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-31 02:32:21.967380 | orchestrator | Tuesday 31 March 2026 02:32:19 +0000 (0:00:00.859) 0:04:52.749 *********
2026-03-31 02:32:21.967392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967432 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:32:21.967440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-31 02:32:21.967476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-31 02:32:21.967506 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:32:21.967514 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:32:21.967521 | orchestrator |
2026-03-31 02:32:21.967528 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-31 02:32:21.967535 | orchestrator | Tuesday 31 March 2026 02:32:21 +0000 (0:00:01.717) 0:04:54.466 *********
2026-03-31 02:32:21.967543 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:32:21.967554 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:32:30.868194 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:32:30.868309 | orchestrator |
2026-03-31 02:32:30.868327 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-31 02:32:30.868340 | orchestrator | Tuesday 31 March 2026 02:32:21 +0000 (0:00:00.466) 0:04:54.933 *********
2026-03-31 02:32:30.868351 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:32:30.868362 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:32:30.868373 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:32:30.868384 | orchestrator |
2026-03-31 02:32:30.868395 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-31 02:32:30.868414 | orchestrator | Tuesday 31 March 2026 02:32:23 +0000 (0:00:01.803) 0:04:56.309 *********
2026-03-31 02:32:30.868432 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:32:30.868451 | orchestrator |
2026-03-31 02:32:30.868469 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-31 02:32:30.868488 | orchestrator | Tuesday 31 March 2026 02:32:25 +0000 (0:00:02.319) 0:04:58.113 *********
2026-03-31 02:32:30.868510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 02:32:30.868569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 02:32:30.868643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 02:32:30.868665 | orchestrator |
2026-03-31 02:32:30.868682 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-31 02:32:30.868702 | orchestrator | Tuesday 31 March 2026 02:32:27 +0000 (0:00:02.319) 0:05:00.432 *********
2026-03-31 02:32:30.868781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 02:32:30.868830 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:30.868853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 02:32:30.868875 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:30.868897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 02:32:30.868917 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:30.868936 | orchestrator | 2026-03-31 02:32:30.868955 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-31 02:32:30.868973 | orchestrator | Tuesday 31 March 2026 02:32:27 +0000 (0:00:00.411) 0:05:00.844 ********* 2026-03-31 02:32:30.868994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 02:32:30.869014 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:30.869034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 02:32:30.869052 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:30.869071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 02:32:30.869084 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:30.869095 | orchestrator | 2026-03-31 02:32:30.869105 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-31 02:32:30.869116 | orchestrator | Tuesday 31 March 2026 02:32:28 +0000 (0:00:00.679) 0:05:01.524 ********* 2026-03-31 02:32:30.869127 | orchestrator | skipping: [testbed-node-0] 
2026-03-31 02:32:30.869137 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:30.869148 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:30.869158 | orchestrator | 2026-03-31 02:32:30.869169 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-31 02:32:30.869180 | orchestrator | Tuesday 31 March 2026 02:32:29 +0000 (0:00:00.889) 0:05:02.413 ********* 2026-03-31 02:32:30.869201 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:39.828607 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:39.828789 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:39.828800 | orchestrator | 2026-03-31 02:32:39.828805 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-31 02:32:39.828810 | orchestrator | Tuesday 31 March 2026 02:32:30 +0000 (0:00:01.416) 0:05:03.830 ********* 2026-03-31 02:32:39.828814 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:32:39.828819 | orchestrator | 2026-03-31 02:32:39.828823 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-31 02:32:39.828827 | orchestrator | Tuesday 31 March 2026 02:32:32 +0000 (0:00:01.553) 0:05:05.383 ********* 2026-03-31 02:32:39.828846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 02:32:39.828909 | orchestrator | 2026-03-31 02:32:39.828913 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-31 02:32:39.828918 | orchestrator | Tuesday 31 March 2026 02:32:38 +0000 (0:00:06.276) 0:05:11.660 ********* 2026-03-31 02:32:39.828921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 02:32:39.828925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 02:32:39.828936 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:45.789470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 02:32:45.789587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 02:32:45.789607 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:45.789622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 02:32:45.789634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 02:32:45.789670 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:45.789682 | orchestrator | 2026-03-31 02:32:45.789695 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-31 02:32:45.789707 | orchestrator | Tuesday 31 March 2026 02:32:39 +0000 (0:00:01.133) 0:05:12.793 ********* 2026-03-31 02:32:45.789737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789867 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:45.789879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 
02:32:45.789925 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:45.789936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 02:32:45.789999 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:45.790012 | orchestrator | 2026-03-31 02:32:45.790100 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-31 02:32:45.790114 | orchestrator | Tuesday 31 March 2026 02:32:40 +0000 (0:00:00.991) 0:05:13.784 ********* 2026-03-31 02:32:45.790126 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:32:45.790138 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:32:45.790151 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:32:45.790163 | orchestrator | 2026-03-31 02:32:45.790175 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-31 02:32:45.790188 | orchestrator | Tuesday 31 March 2026 02:32:42 +0000 (0:00:01.296) 0:05:15.081 ********* 2026-03-31 02:32:45.790200 | orchestrator | 
changed: [testbed-node-0] 2026-03-31 02:32:45.790212 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:32:45.790222 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:32:45.790233 | orchestrator | 2026-03-31 02:32:45.790244 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-31 02:32:45.790255 | orchestrator | Tuesday 31 March 2026 02:32:44 +0000 (0:00:02.326) 0:05:17.408 ********* 2026-03-31 02:32:45.790266 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:45.790276 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:45.790287 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:45.790297 | orchestrator | 2026-03-31 02:32:45.790308 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-31 02:32:45.790319 | orchestrator | Tuesday 31 March 2026 02:32:45 +0000 (0:00:00.674) 0:05:18.082 ********* 2026-03-31 02:32:45.790329 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:45.790340 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:45.790351 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:32:45.790361 | orchestrator | 2026-03-31 02:32:45.790372 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-31 02:32:45.790382 | orchestrator | Tuesday 31 March 2026 02:32:45 +0000 (0:00:00.335) 0:05:18.418 ********* 2026-03-31 02:32:45.790393 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:32:45.790404 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:32:45.790423 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:33:29.266283 | orchestrator | 2026-03-31 02:33:29.266467 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-31 02:33:29.266500 | orchestrator | Tuesday 31 March 2026 02:32:45 +0000 (0:00:00.333) 0:05:18.752 ********* 2026-03-31 02:33:29.266514 | orchestrator | 
skipping: [testbed-node-0] 2026-03-31 02:33:29.266526 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:33:29.266537 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:33:29.266548 | orchestrator | 2026-03-31 02:33:29.266560 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-31 02:33:29.266571 | orchestrator | Tuesday 31 March 2026 02:32:46 +0000 (0:00:00.323) 0:05:19.075 ********* 2026-03-31 02:33:29.266582 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:33:29.266593 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:33:29.266604 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:33:29.266615 | orchestrator | 2026-03-31 02:33:29.266626 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-31 02:33:29.266653 | orchestrator | Tuesday 31 March 2026 02:32:46 +0000 (0:00:00.642) 0:05:19.717 ********* 2026-03-31 02:33:29.266665 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:33:29.266676 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:33:29.266687 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:33:29.266698 | orchestrator | 2026-03-31 02:33:29.266709 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-31 02:33:29.266720 | orchestrator | Tuesday 31 March 2026 02:32:47 +0000 (0:00:00.594) 0:05:20.311 ********* 2026-03-31 02:33:29.266731 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:33:29.266743 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:33:29.266754 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:33:29.266764 | orchestrator | 2026-03-31 02:33:29.266775 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-31 02:33:29.266841 | orchestrator | Tuesday 31 March 2026 02:32:48 +0000 (0:00:00.694) 0:05:21.005 ********* 2026-03-31 02:33:29.266857 | orchestrator | ok: [testbed-node-0] 
2026-03-31 02:33:29.266870 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:33:29.266882 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:33:29.266894 | orchestrator | 2026-03-31 02:33:29.266907 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-31 02:33:29.266919 | orchestrator | Tuesday 31 March 2026 02:32:48 +0000 (0:00:00.372) 0:05:21.378 ********* 2026-03-31 02:33:29.266932 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:33:29.266944 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:33:29.266956 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:33:29.266969 | orchestrator | 2026-03-31 02:33:29.266986 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-31 02:33:29.267007 | orchestrator | Tuesday 31 March 2026 02:32:49 +0000 (0:00:01.359) 0:05:22.737 ********* 2026-03-31 02:33:29.267027 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:33:29.267046 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:33:29.267065 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:33:29.267085 | orchestrator | 2026-03-31 02:33:29.267104 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-31 02:33:29.267123 | orchestrator | Tuesday 31 March 2026 02:32:50 +0000 (0:00:00.920) 0:05:23.658 ********* 2026-03-31 02:33:29.267142 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:33:29.267162 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:33:29.267183 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:33:29.267203 | orchestrator | 2026-03-31 02:33:29.267223 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-31 02:33:29.267242 | orchestrator | Tuesday 31 March 2026 02:32:51 +0000 (0:00:00.895) 0:05:24.554 ********* 2026-03-31 02:33:29.267262 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:33:29.267283 | orchestrator | changed: [testbed-node-2] 
2026-03-31 02:33:29.267304 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:33:29.267323 | orchestrator |
2026-03-31 02:33:29.267335 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-31 02:33:29.267346 | orchestrator | Tuesday 31 March 2026 02:32:56 +0000 (0:00:04.653) 0:05:29.207 *********
2026-03-31 02:33:29.267357 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:33:29.267367 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:33:29.267378 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:33:29.267389 | orchestrator |
2026-03-31 02:33:29.267399 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-31 02:33:29.267410 | orchestrator | Tuesday 31 March 2026 02:32:59 +0000 (0:00:03.160) 0:05:32.367 *********
2026-03-31 02:33:29.267421 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:33:29.267432 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:33:29.267443 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:33:29.267453 | orchestrator |
2026-03-31 02:33:29.267464 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-31 02:33:29.267475 | orchestrator | Tuesday 31 March 2026 02:33:14 +0000 (0:00:15.395) 0:05:47.762 *********
2026-03-31 02:33:29.267486 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:33:29.267496 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:33:29.267507 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:33:29.267518 | orchestrator |
2026-03-31 02:33:29.267528 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-31 02:33:29.267539 | orchestrator | Tuesday 31 March 2026 02:33:15 +0000 (0:00:00.749) 0:05:48.511 *********
2026-03-31 02:33:29.267550 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:33:29.267560 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:33:29.267571 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:33:29.267582 | orchestrator |
2026-03-31 02:33:29.267593 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-31 02:33:29.267603 | orchestrator | Tuesday 31 March 2026 02:33:23 +0000 (0:00:07.990) 0:05:56.502 *********
2026-03-31 02:33:29.267629 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.267640 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.267651 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.267662 | orchestrator |
2026-03-31 02:33:29.267672 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-31 02:33:29.267683 | orchestrator | Tuesday 31 March 2026 02:33:24 +0000 (0:00:00.747) 0:05:57.249 *********
2026-03-31 02:33:29.267694 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.267705 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.267716 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.267726 | orchestrator |
2026-03-31 02:33:29.267761 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-31 02:33:29.267773 | orchestrator | Tuesday 31 March 2026 02:33:24 +0000 (0:00:00.367) 0:05:57.616 *********
2026-03-31 02:33:29.267784 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.267836 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.267847 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.267858 | orchestrator |
2026-03-31 02:33:29.267869 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-31 02:33:29.267880 | orchestrator | Tuesday 31 March 2026 02:33:25 +0000 (0:00:00.365) 0:05:58.021 *********
2026-03-31 02:33:29.267891 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.267902 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.267913 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.267924 | orchestrator |
2026-03-31 02:33:29.267936 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-31 02:33:29.267956 | orchestrator | Tuesday 31 March 2026 02:33:25 +0000 (0:00:00.363) 0:05:58.384 *********
2026-03-31 02:33:29.267975 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.268004 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.268023 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.268043 | orchestrator |
2026-03-31 02:33:29.268063 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-31 02:33:29.268082 | orchestrator | Tuesday 31 March 2026 02:33:26 +0000 (0:00:00.710) 0:05:59.095 *********
2026-03-31 02:33:29.268100 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:29.268119 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:29.268137 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:29.268155 | orchestrator |
2026-03-31 02:33:29.268173 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-31 02:33:29.268192 | orchestrator | Tuesday 31 March 2026 02:33:26 +0000 (0:00:00.365) 0:05:59.461 *********
2026-03-31 02:33:29.268210 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:33:29.268228 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:33:29.268246 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:33:29.268265 | orchestrator |
2026-03-31 02:33:29.268286 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-31 02:33:29.268305 | orchestrator | Tuesday 31 March 2026 02:33:27 +0000 (0:00:01.016) 0:06:00.477 *********
2026-03-31 02:33:29.268325 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:33:29.268346 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:33:29.268366 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:33:29.268378 | orchestrator |
2026-03-31 02:33:29.268389 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:33:29.268401 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-31 02:33:29.268414 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-31 02:33:29.268424 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-31 02:33:29.268435 | orchestrator |
2026-03-31 02:33:29.268456 | orchestrator |
2026-03-31 02:33:29.268468 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:33:29.268479 | orchestrator | Tuesday 31 March 2026 02:33:28 +0000 (0:00:00.872) 0:06:01.350 *********
2026-03-31 02:33:29.268489 | orchestrator | ===============================================================================
2026-03-31 02:33:29.268500 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.40s
2026-03-31 02:33:29.268511 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.99s
2026-03-31 02:33:29.268521 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.28s
2026-03-31 02:33:29.268532 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s
2026-03-31 02:33:29.268542 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.65s
2026-03-31 02:33:29.268553 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.42s
2026-03-31 02:33:29.268564 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.29s
2026-03-31 02:33:29.268574 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.29s
2026-03-31 02:33:29.268585 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.24s
2026-03-31 02:33:29.268595 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.07s
2026-03-31 02:33:29.268606 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.89s
2026-03-31 02:33:29.268617 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.84s
2026-03-31 02:33:29.268627 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.75s
2026-03-31 02:33:29.268638 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.73s
2026-03-31 02:33:29.268648 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.69s
2026-03-31 02:33:29.268659 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.64s
2026-03-31 02:33:29.268670 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.54s
2026-03-31 02:33:29.268680 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.49s
2026-03-31 02:33:29.268691 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.38s
2026-03-31 02:33:29.268702 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.32s
2026-03-31 02:33:31.763053 | orchestrator | 2026-03-31 02:33:31 | INFO  | Task ecffe18d-0a92-445d-ac30-35d0011a274b (opensearch) was prepared for execution.
2026-03-31 02:33:31.763153 | orchestrator | 2026-03-31 02:33:31 | INFO  | It takes a moment until task ecffe18d-0a92-445d-ac30-35d0011a274b (opensearch) has been started and output is visible here.
2026-03-31 02:33:43.081521 | orchestrator |
2026-03-31 02:33:43.081619 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 02:33:43.081633 | orchestrator |
2026-03-31 02:33:43.081641 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 02:33:43.081650 | orchestrator | Tuesday 31 March 2026 02:33:36 +0000 (0:00:00.265) 0:00:00.265 *********
2026-03-31 02:33:43.081658 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:33:43.081667 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:33:43.081674 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:33:43.081682 | orchestrator |
2026-03-31 02:33:43.081690 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 02:33:43.081697 | orchestrator | Tuesday 31 March 2026 02:33:36 +0000 (0:00:00.345) 0:00:00.610 *********
2026-03-31 02:33:43.081720 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-31 02:33:43.081728 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-31 02:33:43.081736 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-31 02:33:43.081743 | orchestrator |
2026-03-31 02:33:43.081751 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-31 02:33:43.081778 | orchestrator |
2026-03-31 02:33:43.081843 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-31 02:33:43.081860 | orchestrator | Tuesday 31 March 2026 02:33:36 +0000 (0:00:00.469) 0:00:01.080 *********
2026-03-31 02:33:43.081868 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:33:43.081876 | orchestrator |
2026-03-31 02:33:43.081883 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-31 02:33:43.081891 | orchestrator | Tuesday 31 March 2026 02:33:37 +0000 (0:00:00.493) 0:00:01.574 *********
2026-03-31 02:33:43.081898 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-31 02:33:43.081905 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-31 02:33:43.081913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-31 02:33:43.081921 | orchestrator |
2026-03-31 02:33:43.081928 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-31 02:33:43.081935 | orchestrator | Tuesday 31 March 2026 02:33:38 +0000 (0:00:00.698) 0:00:02.272 *********
2026-03-31 02:33:43.081945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.081957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.081980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.081996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.082014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.082085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.082095 | orchestrator |
2026-03-31 02:33:43.082105 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-31 02:33:43.082113 | orchestrator | Tuesday 31 March 2026 02:33:40 +0000 (0:00:01.958) 0:00:04.230 *********
2026-03-31 02:33:43.082122 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:33:43.082131 | orchestrator |
2026-03-31 02:33:43.082139 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-31 02:33:43.082148 | orchestrator | Tuesday 31 March 2026 02:33:40 +0000 (0:00:00.574) 0:00:04.805 *********
2026-03-31 02:33:43.082169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.929416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.929495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.929508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.929519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.929580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.929591 | orchestrator |
2026-03-31 02:33:43.929597 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-31 02:33:43.929603 | orchestrator | Tuesday 31 March 2026 02:33:43 +0000 (0:00:02.406) 0:00:07.212 *********
2026-03-31 02:33:43.929609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.929614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:43.929618 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:43.929624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:43.929648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:44.993353 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:44.993466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:44.993488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:44.993503 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:44.993515 | orchestrator |
2026-03-31 02:33:44.993527 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-03-31 02:33:44.993539 | orchestrator | Tuesday 31 March 2026 02:33:43 +0000 (0:00:00.846) 0:00:08.058 *********
2026-03-31 02:33:44.993576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:44.993606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:44.993638 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:33:44.993651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:44.993663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:44.993706 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:33:44.993729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:44.993748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-31 02:33:44.993760 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:33:44.993771 | orchestrator |
2026-03-31 02:33:44.993782 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-03-31 02:33:44.993846 | orchestrator | Tuesday 31 March 2026 02:33:44 +0000 (0:00:01.062) 0:00:09.121 *********
2026-03-31 02:33:53.608224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:53.608340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:53.608356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-31 02:33:53.608407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:33:53.608441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:33:53.608454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:33:53.608473 | orchestrator | 2026-03-31 02:33:53.608486 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-31 02:33:53.608497 | orchestrator | Tuesday 31 March 2026 02:33:47 +0000 (0:00:02.341) 0:00:11.462 ********* 2026-03-31 02:33:53.608507 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:33:53.608518 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:33:53.608528 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:33:53.608538 | orchestrator | 2026-03-31 02:33:53.608548 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-31 02:33:53.608558 | orchestrator | Tuesday 31 March 2026 02:33:49 +0000 (0:00:02.502) 0:00:13.965 ********* 2026-03-31 02:33:53.608568 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:33:53.608578 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:33:53.608587 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:33:53.608597 | orchestrator | 2026-03-31 02:33:53.608607 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-31 
02:33:53.608617 | orchestrator | Tuesday 31 March 2026 02:33:51 +0000 (0:00:01.950) 0:00:15.916 ********* 2026-03-31 02:33:53.608627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:33:53.608643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:33:53.608662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 02:36:38.844750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:36:38.844904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:36:38.845803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 02:36:38.845876 | orchestrator | 2026-03-31 02:36:38.845892 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 02:36:38.845904 | orchestrator | Tuesday 31 March 2026 02:33:53 +0000 (0:00:01.821) 0:00:17.737 ********* 2026-03-31 02:36:38.845915 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:36:38.845926 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:36:38.845937 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:36:38.845946 | orchestrator | 2026-03-31 02:36:38.845957 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 02:36:38.845968 | orchestrator | Tuesday 31 March 2026 02:33:53 +0000 (0:00:00.341) 0:00:18.079 ********* 2026-03-31 02:36:38.845977 | orchestrator | 2026-03-31 02:36:38.845988 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 02:36:38.846068 | orchestrator | Tuesday 31 March 2026 02:33:54 +0000 (0:00:00.064) 0:00:18.143 ********* 2026-03-31 02:36:38.846078 | orchestrator | 2026-03-31 02:36:38.846084 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 02:36:38.846106 | orchestrator | Tuesday 31 March 2026 02:33:54 +0000 (0:00:00.083) 0:00:18.227 ********* 2026-03-31 02:36:38.846113 | orchestrator | 2026-03-31 02:36:38.846118 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-31 02:36:38.846146 | orchestrator | Tuesday 31 March 2026 02:33:54 +0000 (0:00:00.083) 0:00:18.311 ********* 2026-03-31 02:36:38.846152 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:36:38.846158 | orchestrator | 2026-03-31 02:36:38.846164 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-31 02:36:38.846170 | 
orchestrator | Tuesday 31 March 2026 02:33:54 +0000 (0:00:00.216) 0:00:18.527 ********* 2026-03-31 02:36:38.846176 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:36:38.846182 | orchestrator | 2026-03-31 02:36:38.846187 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-31 02:36:38.846193 | orchestrator | Tuesday 31 March 2026 02:33:55 +0000 (0:00:00.702) 0:00:19.229 ********* 2026-03-31 02:36:38.846199 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:38.846205 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:36:38.846210 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:36:38.846216 | orchestrator | 2026-03-31 02:36:38.846222 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-31 02:36:38.846228 | orchestrator | Tuesday 31 March 2026 02:35:03 +0000 (0:01:08.188) 0:01:27.417 ********* 2026-03-31 02:36:38.846234 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:38.846239 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:36:38.846245 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:36:38.846251 | orchestrator | 2026-03-31 02:36:38.846257 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 02:36:38.846262 | orchestrator | Tuesday 31 March 2026 02:36:27 +0000 (0:01:24.598) 0:02:52.016 ********* 2026-03-31 02:36:38.846269 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:36:38.846275 | orchestrator | 2026-03-31 02:36:38.846281 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-31 02:36:38.846286 | orchestrator | Tuesday 31 March 2026 02:36:28 +0000 (0:00:00.572) 0:02:52.589 ********* 2026-03-31 02:36:38.846292 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:36:38.846298 | orchestrator | 2026-03-31 
02:36:38.846304 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-31 02:36:38.846310 | orchestrator | Tuesday 31 March 2026 02:36:31 +0000 (0:00:03.019) 0:02:55.609 ********* 2026-03-31 02:36:38.846315 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:36:38.846321 | orchestrator | 2026-03-31 02:36:38.846327 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-31 02:36:38.846333 | orchestrator | Tuesday 31 March 2026 02:36:33 +0000 (0:00:02.271) 0:02:57.881 ********* 2026-03-31 02:36:38.846339 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:38.846344 | orchestrator | 2026-03-31 02:36:38.846350 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-31 02:36:38.846356 | orchestrator | Tuesday 31 March 2026 02:36:36 +0000 (0:00:02.656) 0:03:00.538 ********* 2026-03-31 02:36:38.846361 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:38.846367 | orchestrator | 2026-03-31 02:36:38.846373 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:36:38.846380 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 02:36:38.846387 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 02:36:38.846399 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 02:36:38.846405 | orchestrator | 2026-03-31 02:36:38.846411 | orchestrator | 2026-03-31 02:36:38.846422 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:36:38.846427 | orchestrator | Tuesday 31 March 2026 02:36:38 +0000 (0:00:02.419) 0:03:02.957 ********* 2026-03-31 02:36:38.846433 | orchestrator | 
=============================================================================== 2026-03-31 02:36:38.846439 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.60s 2026-03-31 02:36:38.846445 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.19s 2026-03-31 02:36:38.846451 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.02s 2026-03-31 02:36:38.846457 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.66s 2026-03-31 02:36:38.846462 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.50s 2026-03-31 02:36:38.846468 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.42s 2026-03-31 02:36:38.846474 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.41s 2026-03-31 02:36:38.846480 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.34s 2026-03-31 02:36:38.846486 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2026-03-31 02:36:38.846492 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.96s 2026-03-31 02:36:38.846497 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.95s 2026-03-31 02:36:38.846503 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.82s 2026-03-31 02:36:38.846509 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2026-03-31 02:36:38.846515 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.85s 2026-03-31 02:36:38.846521 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.70s 2026-03-31 02:36:38.846527 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.70s 2026-03-31 02:36:38.846537 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-31 02:36:39.224242 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-31 02:36:39.224329 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-03-31 02:36:39.224339 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-03-31 02:36:41.688253 | orchestrator | 2026-03-31 02:36:41 | INFO  | Task 37578000-fc6c-45ee-a0cf-de7c5b3c2d77 (memcached) was prepared for execution. 2026-03-31 02:36:41.688369 | orchestrator | 2026-03-31 02:36:41 | INFO  | It takes a moment until task 37578000-fc6c-45ee-a0cf-de7c5b3c2d77 (memcached) has been started and output is visible here. 2026-03-31 02:36:59.000908 | orchestrator | 2026-03-31 02:36:59.001003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 02:36:59.001064 | orchestrator | 2026-03-31 02:36:59.001074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 02:36:59.001084 | orchestrator | Tuesday 31 March 2026 02:36:46 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 02:36:59.001092 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:36:59.001102 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:36:59.001110 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:36:59.001118 | orchestrator | 2026-03-31 02:36:59.001126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 02:36:59.001134 | orchestrator | Tuesday 31 March 2026 02:36:46 +0000 (0:00:00.323) 0:00:00.598 ********* 2026-03-31 02:36:59.001143 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-31 02:36:59.001151 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-31 02:36:59.001159 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-31 02:36:59.001167 | orchestrator | 2026-03-31 02:36:59.001175 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-31 02:36:59.001209 | orchestrator | 2026-03-31 02:36:59.001218 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-31 02:36:59.001225 | orchestrator | Tuesday 31 March 2026 02:36:46 +0000 (0:00:00.433) 0:00:01.031 ********* 2026-03-31 02:36:59.001234 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:36:59.001242 | orchestrator | 2026-03-31 02:36:59.001250 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-31 02:36:59.001258 | orchestrator | Tuesday 31 March 2026 02:36:47 +0000 (0:00:00.540) 0:00:01.572 ********* 2026-03-31 02:36:59.001265 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-31 02:36:59.001273 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-31 02:36:59.001281 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-31 02:36:59.001289 | orchestrator | 2026-03-31 02:36:59.001297 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-31 02:36:59.001304 | orchestrator | Tuesday 31 March 2026 02:36:48 +0000 (0:00:00.654) 0:00:02.227 ********* 2026-03-31 02:36:59.001312 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-31 02:36:59.001320 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-31 02:36:59.001327 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-31 02:36:59.001335 | orchestrator | 2026-03-31 02:36:59.001343 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-03-31 02:36:59.001351 | orchestrator | Tuesday 31 March 2026 02:36:49 +0000 (0:00:01.856) 0:00:04.084 ********* 2026-03-31 02:36:59.001372 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:59.001380 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:36:59.001388 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:36:59.001395 | orchestrator | 2026-03-31 02:36:59.001403 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-31 02:36:59.001411 | orchestrator | Tuesday 31 March 2026 02:36:51 +0000 (0:00:01.476) 0:00:05.560 ********* 2026-03-31 02:36:59.001418 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:36:59.001426 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:36:59.001434 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:36:59.001442 | orchestrator | 2026-03-31 02:36:59.001451 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:36:59.001461 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:36:59.001471 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:36:59.001480 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:36:59.001489 | orchestrator | 2026-03-31 02:36:59.001498 | orchestrator | 2026-03-31 02:36:59.001507 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:36:59.001517 | orchestrator | Tuesday 31 March 2026 02:36:58 +0000 (0:00:07.105) 0:00:12.666 ********* 2026-03-31 02:36:59.001526 | orchestrator | =============================================================================== 2026-03-31 02:36:59.001535 | orchestrator | memcached : Restart memcached container 
--------------------------------- 7.11s 2026-03-31 02:36:59.001544 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.86s 2026-03-31 02:36:59.001554 | orchestrator | memcached : Check memcached container ----------------------------------- 1.48s 2026-03-31 02:36:59.001563 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.65s 2026-03-31 02:36:59.001572 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.54s 2026-03-31 02:36:59.001584 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-03-31 02:36:59.001598 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-03-31 02:37:01.493359 | orchestrator | 2026-03-31 02:37:01 | INFO  | Task 9feb90b9-6b4a-40a1-912a-f61e52c6cff5 (redis) was prepared for execution. 2026-03-31 02:37:01.493442 | orchestrator | 2026-03-31 02:37:01 | INFO  | It takes a moment until task 9feb90b9-6b4a-40a1-912a-f61e52c6cff5 (redis) has been started and output is visible here. 
2026-03-31 02:37:10.945296 | orchestrator | 2026-03-31 02:37:10.945386 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 02:37:10.945397 | orchestrator | 2026-03-31 02:37:10.945405 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 02:37:10.945412 | orchestrator | Tuesday 31 March 2026 02:37:06 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 02:37:10.945420 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:37:10.945429 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:37:10.945436 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:37:10.945443 | orchestrator | 2026-03-31 02:37:10.945450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 02:37:10.945458 | orchestrator | Tuesday 31 March 2026 02:37:06 +0000 (0:00:00.312) 0:00:00.586 ********* 2026-03-31 02:37:10.945465 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-31 02:37:10.945473 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-31 02:37:10.945480 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-31 02:37:10.945487 | orchestrator | 2026-03-31 02:37:10.945495 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-31 02:37:10.945502 | orchestrator | 2026-03-31 02:37:10.945509 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-31 02:37:10.945516 | orchestrator | Tuesday 31 March 2026 02:37:06 +0000 (0:00:00.443) 0:00:01.030 ********* 2026-03-31 02:37:10.945524 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:37:10.945532 | orchestrator | 2026-03-31 02:37:10.945539 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-31 
02:37:10.945546 | orchestrator | Tuesday 31 March 2026 02:37:07 +0000 (0:00:00.496) 0:00:01.527 ********* 2026-03-31 02:37:10.945556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945607 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945646 | orchestrator | 2026-03-31 02:37:10.945654 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-31 02:37:10.945661 | orchestrator | Tuesday 31 March 2026 02:37:08 +0000 (0:00:01.082) 0:00:02.609 ********* 2026-03-31 02:37:10.945669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:10.945792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289590 | orchestrator | 2026-03-31 02:37:15.289601 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-31 02:37:15.289610 | orchestrator | Tuesday 31 March 2026 02:37:10 +0000 (0:00:02.590) 0:00:05.200 ********* 2026-03-31 02:37:15.289618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 
02:37:15.289649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289716 | orchestrator | 2026-03-31 02:37:15.289726 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-31 02:37:15.289737 | orchestrator | Tuesday 31 March 2026 02:37:13 +0000 (0:00:02.558) 0:00:07.758 ********* 2026-03-31 02:37:15.289747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:15.289828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 02:37:26.521582 | orchestrator | 2026-03-31 02:37:26.521682 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 02:37:26.521695 | orchestrator | Tuesday 31 March 2026 02:37:15 +0000 (0:00:01.585) 0:00:09.343 ********* 2026-03-31 02:37:26.521703 | orchestrator | 2026-03-31 02:37:26.521711 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 02:37:26.521718 | orchestrator | Tuesday 31 March 2026 02:37:15 +0000 (0:00:00.071) 0:00:09.415 ********* 2026-03-31 02:37:26.521726 | orchestrator | 2026-03-31 02:37:26.521734 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 02:37:26.521742 | orchestrator | Tuesday 31 March 2026 
02:37:15 +0000 (0:00:00.065) 0:00:09.481 *********
2026-03-31 02:37:26.521766 | orchestrator |
2026-03-31 02:37:26.521820 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-31 02:37:26.521828 | orchestrator | Tuesday 31 March 2026 02:37:15 +0000 (0:00:00.064) 0:00:09.546 *********
2026-03-31 02:37:26.521834 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:37:26.521841 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:37:26.521847 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:37:26.521852 | orchestrator |
2026-03-31 02:37:26.521858 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-31 02:37:26.521864 | orchestrator | Tuesday 31 March 2026 02:37:18 +0000 (0:00:02.869) 0:00:12.415 *********
2026-03-31 02:37:26.521888 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:37:26.521894 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:37:26.521899 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:37:26.521905 | orchestrator |
2026-03-31 02:37:26.521913 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:37:26.521922 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:37:26.521931 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:37:26.521953 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 02:37:26.521960 | orchestrator |
2026-03-31 02:37:26.521969 | orchestrator |
2026-03-31 02:37:26.521975 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:37:26.521983 | orchestrator | Tuesday 31 March 2026 02:37:26 +0000 (0:00:07.989) 0:00:20.405 *********
2026-03-31 02:37:26.521990 | orchestrator |
===============================================================================
2026-03-31 02:37:26.521997 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.99s
2026-03-31 02:37:26.522004 | orchestrator | redis : Restart redis container ----------------------------------------- 2.87s
2026-03-31 02:37:26.522011 | orchestrator | redis : Copying over default config.json files -------------------------- 2.59s
2026-03-31 02:37:26.522124 | orchestrator | redis : Copying over redis config files --------------------------------- 2.56s
2026-03-31 02:37:26.522140 | orchestrator | redis : Check redis containers ------------------------------------------ 1.59s
2026-03-31 02:37:26.522147 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.08s
2026-03-31 02:37:26.522155 | orchestrator | redis : include_tasks --------------------------------------------------- 0.50s
2026-03-31 02:37:26.522164 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-03-31 02:37:26.522173 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-31 02:37:26.522182 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-03-31 02:37:29.010388 | orchestrator | 2026-03-31 02:37:29 | INFO  | Task d42985c3-53c5-4c55-852d-d145c44f0be2 (mariadb) was prepared for execution.
2026-03-31 02:37:29.010510 | orchestrator | 2026-03-31 02:37:29 | INFO  | It takes a moment until task d42985c3-53c5-4c55-852d-d145c44f0be2 (mariadb) has been started and output is visible here.
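The PLAY RECAP above is the machine-checkable summary of the redis play: per-host counters for ok/changed/unreachable/failed tasks. A minimal sketch (not part of this job; the `parse_recap`/`all_green` helpers and the regex are illustrative assumptions) of how tooling downstream of such a log could parse those host lines and gate on the failure counters:

```python
import re

# Matches Ansible PLAY RECAP host lines like:
#   testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Return {host: counters} for every line that looks like a recap row."""
    stats = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            stats[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return stats

def all_green(stats):
    """True when no host reported failed or unreachable tasks."""
    return all(s["failed"] == 0 and s["unreachable"] == 0 for s in stats.values())

# The three recap rows from the redis play above:
recap = [
    "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
    "testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
    "testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
]
stats = parse_recap(recap)
print(all_green(stats))  # → True: all three nodes deployed cleanly
```

Gating on `failed` and `unreachable` (rather than `changed`) mirrors how the recap is usually read: `changed=6` is expected on a deploy, while any nonzero failure counter should fail the pipeline.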
2026-03-31 02:37:43.247479 | orchestrator |
2026-03-31 02:37:43.247614 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 02:37:43.247637 | orchestrator |
2026-03-31 02:37:43.247654 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 02:37:43.247669 | orchestrator | Tuesday 31 March 2026 02:37:33 +0000 (0:00:00.193) 0:00:00.193 *********
2026-03-31 02:37:43.247684 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:37:43.247700 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:37:43.247715 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:37:43.247729 | orchestrator |
2026-03-31 02:37:43.247745 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 02:37:43.247761 | orchestrator | Tuesday 31 March 2026 02:37:33 +0000 (0:00:00.302) 0:00:00.496 *********
2026-03-31 02:37:43.247776 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-31 02:37:43.247792 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-31 02:37:43.247807 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-31 02:37:43.247822 | orchestrator |
2026-03-31 02:37:43.247837 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-31 02:37:43.247852 | orchestrator |
2026-03-31 02:37:43.247867 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-31 02:37:43.247911 | orchestrator | Tuesday 31 March 2026 02:37:34 +0000 (0:00:00.654) 0:00:01.151 *********
2026-03-31 02:37:43.247928 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:37:43.247943 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 02:37:43.247958 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 02:37:43.247972 | orchestrator |
2026-03-31 02:37:43.247999 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 02:37:43.248016 | orchestrator | Tuesday 31 March 2026 02:37:34 +0000 (0:00:00.428) 0:00:01.579 ********* 2026-03-31 02:37:43.248031 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:37:43.248048 | orchestrator | 2026-03-31 02:37:43.248227 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-31 02:37:43.248349 | orchestrator | Tuesday 31 March 2026 02:37:35 +0000 (0:00:00.569) 0:00:02.149 ********* 2026-03-31 02:37:43.248384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:43.248423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:43.248451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-31 02:37:43.248462 | orchestrator |
2026-03-31 02:37:43.248470 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-31 02:37:43.248479 | orchestrator | Tuesday 31 March 2026 02:37:37 +0000 (0:00:02.592) 0:00:04.741 *********
2026-03-31 02:37:43.248488 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:37:43.248498 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:37:43.248507 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:37:43.248516 | orchestrator |
2026-03-31 02:37:43.248524 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-31 02:37:43.248533 | orchestrator | Tuesday 31 March 2026 02:37:38 +0000 (0:00:00.643) 0:00:05.384 *********
2026-03-31 02:37:43.248542 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:37:43.248550 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:37:43.248559 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:37:43.248568 | orchestrator |
2026-03-31 02:37:43.248576 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-31 02:37:43.248585 | orchestrator | Tuesday 31 March 2026 02:37:40 +0000 (0:00:01.536) 0:00:06.920 *********
2026-03-31 02:37:43.248603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:51.169488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:51.169568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:51.169592 | orchestrator | 2026-03-31 02:37:51.169599 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-31 02:37:51.169604 | orchestrator | Tuesday 31 March 2026 02:37:43 +0000 (0:00:03.161) 0:00:10.082 ********* 2026-03-31 02:37:51.169609 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:37:51.169614 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:37:51.169618 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:37:51.169622 | orchestrator | 2026-03-31 02:37:51.169626 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-31 02:37:51.169641 | orchestrator | Tuesday 31 March 2026 02:37:44 +0000 (0:00:01.136) 0:00:11.218 ********* 2026-03-31 02:37:51.169645 | 
orchestrator | changed: [testbed-node-0] 2026-03-31 02:37:51.169649 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:37:51.169654 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:37:51.169658 | orchestrator | 2026-03-31 02:37:51.169662 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 02:37:51.169666 | orchestrator | Tuesday 31 March 2026 02:37:48 +0000 (0:00:04.005) 0:00:15.223 ********* 2026-03-31 02:37:51.169671 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:37:51.169675 | orchestrator | 2026-03-31 02:37:51.169680 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-31 02:37:51.169684 | orchestrator | Tuesday 31 March 2026 02:37:48 +0000 (0:00:00.572) 0:00:15.796 ********* 2026-03-31 02:37:51.169692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:51.169701 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:37:51.169709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:56.369950 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:37:56.370188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:56.370242 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:37:56.370256 | orchestrator | 2026-03-31 02:37:56.370268 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-31 02:37:56.370280 | orchestrator | Tuesday 31 March 2026 02:37:51 +0000 (0:00:02.208) 0:00:18.005 ********* 2026-03-31 02:37:56.370292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:56.370304 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:37:56.370345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:56.370368 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:37:56.370381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:56.370392 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:37:56.370403 | orchestrator | 2026-03-31 02:37:56.370414 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-31 02:37:56.370425 | orchestrator | Tuesday 31 March 2026 02:37:53 +0000 (0:00:02.803) 0:00:20.809 ********* 2026-03-31 02:37:56.370452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:59.201707 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:37:59.201831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:59.201867 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:37:59.201912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 02:37:59.201968 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:37:59.201987 | orchestrator | 2026-03-31 02:37:59.202008 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-31 02:37:59.202185 | orchestrator | Tuesday 31 March 2026 02:37:56 +0000 (0:00:02.399) 0:00:23.209 ********* 2026-03-31 02:37:59.202250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:59.202273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:37:59.202318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 02:40:17.653697 | orchestrator | 2026-03-31 02:40:17.653815 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-31 02:40:17.653832 | orchestrator | Tuesday 31 March 2026 02:37:59 +0000 (0:00:02.832) 0:00:26.041 ********* 2026-03-31 02:40:17.653844 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:17.653856 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:40:17.653867 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:40:17.653878 | orchestrator | 2026-03-31 02:40:17.653889 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-31 02:40:17.653899 | orchestrator | Tuesday 31 March 2026 02:38:00 +0000 (0:00:00.869) 0:00:26.911 ********* 2026-03-31 02:40:17.653910 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.653922 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.653933 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.653944 | orchestrator | 2026-03-31 02:40:17.653955 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-31 02:40:17.653965 | orchestrator | Tuesday 31 March 2026 02:38:00 +0000 (0:00:00.573) 0:00:27.485 ********* 2026-03-31 02:40:17.653976 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.653987 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.653997 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.654008 | orchestrator | 2026-03-31 02:40:17.654090 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-31 02:40:17.654102 | orchestrator | Tuesday 31 March 2026 02:38:00 +0000 (0:00:00.320) 0:00:27.805 ********* 2026-03-31 02:40:17.654115 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-31 02:40:17.654127 | orchestrator | ...ignoring 2026-03-31 02:40:17.654138 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-31 02:40:17.654149 | orchestrator | ...ignoring 2026-03-31 02:40:17.654160 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-31 02:40:17.654171 | orchestrator | ...ignoring 2026-03-31 02:40:17.654270 | orchestrator | 2026-03-31 02:40:17.654293 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-31 02:40:17.654311 | orchestrator | Tuesday 31 March 2026 02:38:11 +0000 (0:00:10.831) 0:00:38.637 ********* 2026-03-31 02:40:17.654330 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.654348 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.654367 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.654386 | orchestrator | 2026-03-31 02:40:17.654406 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-31 02:40:17.654426 | orchestrator | Tuesday 31 March 2026 02:38:12 +0000 (0:00:00.456) 0:00:39.094 ********* 2026-03-31 02:40:17.654446 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.654467 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.654489 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.654510 | orchestrator | 2026-03-31 02:40:17.654529 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-31 02:40:17.654542 | orchestrator | Tuesday 31 March 2026 02:38:12 +0000 (0:00:00.672) 0:00:39.766 ********* 2026-03-31 02:40:17.654554 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.654565 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.654575 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.654586 | orchestrator | 2026-03-31 02:40:17.654612 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-31 02:40:17.654624 | orchestrator | Tuesday 31 March 2026 02:38:13 +0000 (0:00:00.444) 0:00:40.211 ********* 2026-03-31 02:40:17.654636 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 02:40:17.654646 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.654657 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.654668 | orchestrator | 2026-03-31 02:40:17.654678 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-31 02:40:17.654689 | orchestrator | Tuesday 31 March 2026 02:38:13 +0000 (0:00:00.453) 0:00:40.665 ********* 2026-03-31 02:40:17.654699 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.654710 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.654721 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.654731 | orchestrator | 2026-03-31 02:40:17.654742 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-31 02:40:17.654753 | orchestrator | Tuesday 31 March 2026 02:38:14 +0000 (0:00:00.431) 0:00:41.097 ********* 2026-03-31 02:40:17.654764 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.654774 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.654785 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.654796 | orchestrator | 2026-03-31 02:40:17.654806 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 02:40:17.654817 | orchestrator | Tuesday 31 March 2026 02:38:14 +0000 (0:00:00.709) 0:00:41.807 ********* 2026-03-31 02:40:17.654827 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.654838 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.654848 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-31 02:40:17.654859 | orchestrator | 2026-03-31 02:40:17.654869 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-31 02:40:17.654880 | orchestrator | Tuesday 31 March 2026 02:38:15 +0000 (0:00:00.420) 0:00:42.227 ********* 2026-03-31 
02:40:17.654890 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:17.654901 | orchestrator | 2026-03-31 02:40:17.654912 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-31 02:40:17.654922 | orchestrator | Tuesday 31 March 2026 02:38:25 +0000 (0:00:10.298) 0:00:52.526 ********* 2026-03-31 02:40:17.654933 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.654943 | orchestrator | 2026-03-31 02:40:17.654954 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 02:40:17.654965 | orchestrator | Tuesday 31 March 2026 02:38:25 +0000 (0:00:00.137) 0:00:52.664 ********* 2026-03-31 02:40:17.654976 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.655020 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.655032 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.655043 | orchestrator | 2026-03-31 02:40:17.655054 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-31 02:40:17.655065 | orchestrator | Tuesday 31 March 2026 02:38:26 +0000 (0:00:00.981) 0:00:53.645 ********* 2026-03-31 02:40:17.655075 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:17.655086 | orchestrator | 2026-03-31 02:40:17.655101 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-31 02:40:17.655119 | orchestrator | Tuesday 31 March 2026 02:38:34 +0000 (0:00:08.069) 0:01:01.714 ********* 2026-03-31 02:40:17.655137 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.655155 | orchestrator | 2026-03-31 02:40:17.655173 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-31 02:40:17.655215 | orchestrator | Tuesday 31 March 2026 02:38:37 +0000 (0:00:02.580) 0:01:04.294 ********* 2026-03-31 02:40:17.655238 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.655256 | 
orchestrator | 2026-03-31 02:40:17.655276 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-31 02:40:17.655294 | orchestrator | Tuesday 31 March 2026 02:38:39 +0000 (0:00:02.545) 0:01:06.839 ********* 2026-03-31 02:40:17.655309 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:17.655320 | orchestrator | 2026-03-31 02:40:17.655331 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-31 02:40:17.655341 | orchestrator | Tuesday 31 March 2026 02:38:40 +0000 (0:00:00.130) 0:01:06.970 ********* 2026-03-31 02:40:17.655352 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.655363 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:17.655373 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:17.655384 | orchestrator | 2026-03-31 02:40:17.655394 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-31 02:40:17.655405 | orchestrator | Tuesday 31 March 2026 02:38:40 +0000 (0:00:00.357) 0:01:07.327 ********* 2026-03-31 02:40:17.655416 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:17.655426 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-31 02:40:17.655437 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:40:17.655448 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:40:17.655516 | orchestrator | 2026-03-31 02:40:17.655527 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-31 02:40:17.655538 | orchestrator | skipping: no hosts matched 2026-03-31 02:40:17.655549 | orchestrator | 2026-03-31 02:40:17.655559 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-31 02:40:17.655570 | orchestrator | 2026-03-31 02:40:17.655581 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-31 02:40:17.655591 | orchestrator | Tuesday 31 March 2026 02:38:41 +0000 (0:00:00.544) 0:01:07.872 ********* 2026-03-31 02:40:17.655602 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:40:17.655612 | orchestrator | 2026-03-31 02:40:17.655623 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-31 02:40:17.655636 | orchestrator | Tuesday 31 March 2026 02:38:59 +0000 (0:00:18.439) 0:01:26.311 ********* 2026-03-31 02:40:17.655656 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.655676 | orchestrator | 2026-03-31 02:40:17.655698 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-31 02:40:17.655717 | orchestrator | Tuesday 31 March 2026 02:39:16 +0000 (0:00:16.595) 0:01:42.907 ********* 2026-03-31 02:40:17.655733 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:17.655744 | orchestrator | 2026-03-31 02:40:17.655759 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-31 02:40:17.655770 | orchestrator | 2026-03-31 02:40:17.655788 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-31 02:40:17.655799 | orchestrator | Tuesday 31 March 2026 02:39:18 +0000 (0:00:02.481) 0:01:45.388 ********* 2026-03-31 02:40:17.655824 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:40:17.655835 | orchestrator | 2026-03-31 02:40:17.655845 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-31 02:40:17.655856 | orchestrator | Tuesday 31 March 2026 02:39:37 +0000 (0:00:18.605) 0:02:03.993 ********* 2026-03-31 02:40:17.655867 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.655877 | orchestrator | 2026-03-31 02:40:17.655888 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-31 02:40:17.655898 
| orchestrator | Tuesday 31 March 2026 02:39:53 +0000 (0:00:16.602) 0:02:20.596 ********* 2026-03-31 02:40:17.655909 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:17.655919 | orchestrator | 2026-03-31 02:40:17.655930 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-31 02:40:17.655940 | orchestrator | 2026-03-31 02:40:17.655951 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-31 02:40:17.655962 | orchestrator | Tuesday 31 March 2026 02:39:56 +0000 (0:00:02.569) 0:02:23.166 ********* 2026-03-31 02:40:17.655972 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:17.655983 | orchestrator | 2026-03-31 02:40:17.655994 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-31 02:40:17.656004 | orchestrator | Tuesday 31 March 2026 02:40:08 +0000 (0:00:12.478) 0:02:35.644 ********* 2026-03-31 02:40:17.656014 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.656025 | orchestrator | 2026-03-31 02:40:17.656040 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-31 02:40:17.656059 | orchestrator | Tuesday 31 March 2026 02:40:14 +0000 (0:00:05.536) 0:02:41.181 ********* 2026-03-31 02:40:17.656078 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:17.656096 | orchestrator | 2026-03-31 02:40:17.656114 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-31 02:40:17.656125 | orchestrator | 2026-03-31 02:40:17.656136 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-31 02:40:17.656146 | orchestrator | Tuesday 31 March 2026 02:40:16 +0000 (0:00:02.592) 0:02:43.774 ********* 2026-03-31 02:40:17.656157 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:40:17.656168 | orchestrator | 
2026-03-31 02:40:17.656178 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-31 02:40:17.656231 | orchestrator | Tuesday 31 March 2026 02:40:17 +0000 (0:00:00.711) 0:02:44.485 ********* 2026-03-31 02:40:30.616018 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:30.616174 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:30.616275 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:30.616297 | orchestrator | 2026-03-31 02:40:30.616310 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-31 02:40:30.616323 | orchestrator | Tuesday 31 March 2026 02:40:20 +0000 (0:00:02.394) 0:02:46.880 ********* 2026-03-31 02:40:30.616334 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:30.616351 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:30.616369 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:30.616387 | orchestrator | 2026-03-31 02:40:30.616405 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-31 02:40:30.616423 | orchestrator | Tuesday 31 March 2026 02:40:22 +0000 (0:00:02.198) 0:02:49.079 ********* 2026-03-31 02:40:30.616441 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:30.616459 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:30.616478 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:30.616496 | orchestrator | 2026-03-31 02:40:30.616512 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-31 02:40:30.616530 | orchestrator | Tuesday 31 March 2026 02:40:24 +0000 (0:00:02.550) 0:02:51.629 ********* 2026-03-31 02:40:30.616547 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:30.616565 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:30.616582 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:40:30.616601 | orchestrator | 
2026-03-31 02:40:30.616655 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-31 02:40:30.616674 | orchestrator | Tuesday 31 March 2026 02:40:26 +0000 (0:00:02.111) 0:02:53.741 ********* 2026-03-31 02:40:30.616691 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:30.616709 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:30.616727 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:30.616746 | orchestrator | 2026-03-31 02:40:30.616765 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-31 02:40:30.616783 | orchestrator | Tuesday 31 March 2026 02:40:29 +0000 (0:00:02.894) 0:02:56.635 ********* 2026-03-31 02:40:30.616803 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:30.616822 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:40:30.616841 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:40:30.616860 | orchestrator | 2026-03-31 02:40:30.616878 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:40:30.616899 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-31 02:40:30.616922 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-31 02:40:30.616942 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-31 02:40:30.616962 | orchestrator | 2026-03-31 02:40:30.616982 | orchestrator | 2026-03-31 02:40:30.617000 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:40:30.617019 | orchestrator | Tuesday 31 March 2026 02:40:30 +0000 (0:00:00.433) 0:02:57.069 ********* 2026-03-31 02:40:30.617037 | orchestrator | =============================================================================== 2026-03-31 02:40:30.617075 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.04s 2026-03-31 02:40:30.617094 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.20s 2026-03-31 02:40:30.617114 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.48s 2026-03-31 02:40:30.617132 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2026-03-31 02:40:30.617151 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.30s 2026-03-31 02:40:30.617170 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.07s 2026-03-31 02:40:30.617190 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.54s 2026-03-31 02:40:30.617284 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.05s 2026-03-31 02:40:30.617306 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.01s 2026-03-31 02:40:30.617326 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.16s 2026-03-31 02:40:30.617346 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.89s 2026-03-31 02:40:30.617366 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.83s 2026-03-31 02:40:30.617386 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.80s 2026-03-31 02:40:30.617407 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.59s 2026-03-31 02:40:30.617425 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.59s 2026-03-31 02:40:30.617444 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.58s 2026-03-31 02:40:30.617465 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.55s 2026-03-31 02:40:30.617485 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.55s 2026-03-31 02:40:30.617505 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.40s 2026-03-31 02:40:30.617524 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.39s 2026-03-31 02:40:33.026752 | orchestrator | 2026-03-31 02:40:33 | INFO  | Task 4403dd2f-7386-487f-9220-a5091098449d (rabbitmq) was prepared for execution. 2026-03-31 02:40:33.026880 | orchestrator | 2026-03-31 02:40:33 | INFO  | It takes a moment until task 4403dd2f-7386-487f-9220-a5091098449d (rabbitmq) has been started and output is visible here. 2026-03-31 02:40:46.566336 | orchestrator | 2026-03-31 02:40:46.566487 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 02:40:46.566511 | orchestrator | 2026-03-31 02:40:46.566527 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 02:40:46.566542 | orchestrator | Tuesday 31 March 2026 02:40:37 +0000 (0:00:00.175) 0:00:00.175 ********* 2026-03-31 02:40:46.566557 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:46.566575 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:40:46.566589 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:40:46.566603 | orchestrator | 2026-03-31 02:40:46.566618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 02:40:46.566632 | orchestrator | Tuesday 31 March 2026 02:40:37 +0000 (0:00:00.315) 0:00:00.490 ********* 2026-03-31 02:40:46.566648 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-31 02:40:46.566664 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-31 02:40:46.566680 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-31 02:40:46.566697 | orchestrator | 2026-03-31 02:40:46.566714 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-31 02:40:46.566729 | orchestrator | 2026-03-31 02:40:46.566743 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-31 02:40:46.566758 | orchestrator | Tuesday 31 March 2026 02:40:38 +0000 (0:00:00.565) 0:00:01.056 ********* 2026-03-31 02:40:46.566790 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:40:46.566808 | orchestrator | 2026-03-31 02:40:46.566824 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-31 02:40:46.566839 | orchestrator | Tuesday 31 March 2026 02:40:38 +0000 (0:00:00.514) 0:00:01.570 ********* 2026-03-31 02:40:46.566856 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:46.566872 | orchestrator | 2026-03-31 02:40:46.566887 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-31 02:40:46.566910 | orchestrator | Tuesday 31 March 2026 02:40:39 +0000 (0:00:01.005) 0:00:02.576 ********* 2026-03-31 02:40:46.566926 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.566944 | orchestrator | 2026-03-31 02:40:46.566959 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-31 02:40:46.566974 | orchestrator | Tuesday 31 March 2026 02:40:40 +0000 (0:00:00.396) 0:00:02.973 ********* 2026-03-31 02:40:46.566989 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.567006 | orchestrator | 2026-03-31 02:40:46.567022 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-31 02:40:46.567038 | orchestrator | Tuesday 31 March 2026 02:40:40 +0000 (0:00:00.397) 0:00:03.370 ********* 
2026-03-31 02:40:46.567053 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.567069 | orchestrator | 2026-03-31 02:40:46.567085 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-31 02:40:46.567099 | orchestrator | Tuesday 31 March 2026 02:40:40 +0000 (0:00:00.374) 0:00:03.745 ********* 2026-03-31 02:40:46.567114 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.567127 | orchestrator | 2026-03-31 02:40:46.567141 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-31 02:40:46.567156 | orchestrator | Tuesday 31 March 2026 02:40:41 +0000 (0:00:00.572) 0:00:04.318 ********* 2026-03-31 02:40:46.567199 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:40:46.567289 | orchestrator | 2026-03-31 02:40:46.567308 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-31 02:40:46.567324 | orchestrator | Tuesday 31 March 2026 02:40:42 +0000 (0:00:00.911) 0:00:05.230 ********* 2026-03-31 02:40:46.567339 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:40:46.567354 | orchestrator | 2026-03-31 02:40:46.567370 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-31 02:40:46.567386 | orchestrator | Tuesday 31 March 2026 02:40:43 +0000 (0:00:00.828) 0:00:06.059 ********* 2026-03-31 02:40:46.567401 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.567416 | orchestrator | 2026-03-31 02:40:46.567430 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-31 02:40:46.567445 | orchestrator | Tuesday 31 March 2026 02:40:43 +0000 (0:00:00.385) 0:00:06.444 ********* 2026-03-31 02:40:46.567461 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:40:46.567476 | orchestrator | 2026-03-31 
02:40:46.567492 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-31 02:40:46.567508 | orchestrator | Tuesday 31 March 2026 02:40:44 +0000 (0:00:00.365) 0:00:06.809 ********* 2026-03-31 02:40:46.567561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:40:46.567584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:40:46.567602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:40:46.567633 | orchestrator | 2026-03-31 02:40:46.567690 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-31 02:40:46.567709 | orchestrator | Tuesday 31 March 2026 02:40:44 +0000 (0:00:00.859) 0:00:07.669 ********* 2026-03-31 02:40:46.567726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:40:46.567757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:41:05.439417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:41:05.439558 | orchestrator | 2026-03-31 02:41:05.439577 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-31 02:41:05.439589 | orchestrator | Tuesday 31 March 2026 02:40:46 +0000 (0:00:01.646) 0:00:09.315 ********* 2026-03-31 02:41:05.440327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-31 02:41:05.440362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-31 02:41:05.440374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-31 02:41:05.440384 | orchestrator | 2026-03-31 02:41:05.440394 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-31 02:41:05.440404 | orchestrator | Tuesday 31 March 2026 02:40:48 +0000 (0:00:01.463) 0:00:10.779 ********* 2026-03-31 02:41:05.440427 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-31 02:41:05.440438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-31 02:41:05.440448 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-31 02:41:05.440458 | orchestrator | 2026-03-31 02:41:05.440467 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-31 02:41:05.440477 | orchestrator | Tuesday 31 March 2026 02:40:49 +0000 (0:00:01.735) 0:00:12.514 ********* 2026-03-31 02:41:05.440486 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-31 02:41:05.440496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-31 02:41:05.440505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-31 02:41:05.440515 | orchestrator | 2026-03-31 02:41:05.440524 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-31 02:41:05.440533 | orchestrator | Tuesday 31 March 2026 02:40:51 +0000 (0:00:01.377) 0:00:13.891 ********* 2026-03-31 02:41:05.440543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-31 02:41:05.440552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-31 02:41:05.440562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-31 02:41:05.440571 | orchestrator | 2026-03-31 02:41:05.440580 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-03-31 02:41:05.440590 | orchestrator | Tuesday 31 March 2026 02:40:52 +0000 (0:00:01.692) 0:00:15.584 ********* 2026-03-31 02:41:05.440599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-31 02:41:05.440608 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-31 02:41:05.440618 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-31 02:41:05.440627 | orchestrator | 2026-03-31 02:41:05.440637 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-31 02:41:05.440647 | orchestrator | Tuesday 31 March 2026 02:40:54 +0000 (0:00:01.429) 0:00:17.014 ********* 2026-03-31 02:41:05.440657 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-31 02:41:05.440666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-31 02:41:05.440675 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-31 02:41:05.440685 | orchestrator | 2026-03-31 02:41:05.440694 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-31 02:41:05.440704 | orchestrator | Tuesday 31 March 2026 02:40:55 +0000 (0:00:01.404) 0:00:18.418 ********* 2026-03-31 02:41:05.440714 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:41:05.440725 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:41:05.440755 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:41:05.440781 | orchestrator | 2026-03-31 02:41:05.440797 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-31 02:41:05.440813 | orchestrator | Tuesday 
31 March 2026 02:40:56 +0000 (0:00:00.390) 0:00:18.809 ********* 2026-03-31 02:41:05.440832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:41:05.440857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:41:05.440876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 02:41:05.440892 | orchestrator | 2026-03-31 02:41:05.440909 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-31 02:41:05.440927 | orchestrator | Tuesday 31 March 2026 02:40:57 +0000 (0:00:01.252) 0:00:20.062 ********* 2026-03-31 02:41:05.440944 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:41:05.440960 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:41:05.440976 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:41:05.440993 | orchestrator | 2026-03-31 02:41:05.441011 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-31 02:41:05.441040 | orchestrator | Tuesday 31 March 2026 02:40:58 +0000 (0:00:00.823) 0:00:20.885 ********* 2026-03-31 02:41:05.441057 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:41:05.441069 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:41:05.441079 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:41:05.441089 | orchestrator | 2026-03-31 02:41:05.441098 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-31 02:41:05.441117 | orchestrator | Tuesday 31 March 2026 02:41:05 +0000 (0:00:07.304) 0:00:28.190 ********* 2026-03-31 02:42:41.407564 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:42:41.407661 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:42:41.407673 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:42:41.407682 | orchestrator | 2026-03-31 02:42:41.407692 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-31 02:42:41.407701 | orchestrator | 2026-03-31 02:42:41.407709 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-31 02:42:41.407718 | orchestrator | Tuesday 31 March 2026 02:41:05 +0000 (0:00:00.502) 0:00:28.692 ********* 2026-03-31 02:42:41.407726 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:42:41.407735 | orchestrator | 2026-03-31 02:42:41.407743 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-31 02:42:41.407751 | orchestrator | Tuesday 31 March 2026 02:41:06 +0000 (0:00:00.654) 0:00:29.346 ********* 2026-03-31 02:42:41.407759 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:42:41.407767 | orchestrator | 2026-03-31 02:42:41.407775 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-31 02:42:41.407783 | orchestrator | Tuesday 31 
March 2026 02:41:06 +0000 (0:00:00.242) 0:00:29.588 ********* 2026-03-31 02:42:41.407790 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:42:41.407798 | orchestrator | 2026-03-31 02:42:41.407806 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-31 02:42:41.407814 | orchestrator | Tuesday 31 March 2026 02:41:08 +0000 (0:00:01.691) 0:00:31.280 ********* 2026-03-31 02:42:41.407822 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:42:41.407830 | orchestrator | 2026-03-31 02:42:41.407838 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-31 02:42:41.407846 | orchestrator | 2026-03-31 02:42:41.407854 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-31 02:42:41.407862 | orchestrator | Tuesday 31 March 2026 02:42:02 +0000 (0:00:54.187) 0:01:25.468 ********* 2026-03-31 02:42:41.407869 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:42:41.407877 | orchestrator | 2026-03-31 02:42:41.407885 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-31 02:42:41.407893 | orchestrator | Tuesday 31 March 2026 02:42:03 +0000 (0:00:00.578) 0:01:26.046 ********* 2026-03-31 02:42:41.407901 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:42:41.407909 | orchestrator | 2026-03-31 02:42:41.407916 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-31 02:42:41.407924 | orchestrator | Tuesday 31 March 2026 02:42:03 +0000 (0:00:00.271) 0:01:26.317 ********* 2026-03-31 02:42:41.407932 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:42:41.407940 | orchestrator | 2026-03-31 02:42:41.407948 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-31 02:42:41.407969 | orchestrator | Tuesday 31 March 2026 02:42:05 +0000 (0:00:01.623) 0:01:27.941 
********* 2026-03-31 02:42:41.407977 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:42:41.407985 | orchestrator | 2026-03-31 02:42:41.407993 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-31 02:42:41.408001 | orchestrator | 2026-03-31 02:42:41.408009 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-31 02:42:41.408017 | orchestrator | Tuesday 31 March 2026 02:42:19 +0000 (0:00:14.498) 0:01:42.440 ********* 2026-03-31 02:42:41.408025 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:42:41.408033 | orchestrator | 2026-03-31 02:42:41.408061 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-31 02:42:41.408069 | orchestrator | Tuesday 31 March 2026 02:42:20 +0000 (0:00:00.792) 0:01:43.232 ********* 2026-03-31 02:42:41.408077 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:42:41.408085 | orchestrator | 2026-03-31 02:42:41.408093 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-31 02:42:41.408100 | orchestrator | Tuesday 31 March 2026 02:42:20 +0000 (0:00:00.242) 0:01:43.475 ********* 2026-03-31 02:42:41.408109 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:42:41.408123 | orchestrator | 2026-03-31 02:42:41.408136 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-31 02:42:41.408150 | orchestrator | Tuesday 31 March 2026 02:42:27 +0000 (0:00:06.654) 0:01:50.129 ********* 2026-03-31 02:42:41.408165 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:42:41.408184 | orchestrator | 2026-03-31 02:42:41.408198 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-31 02:42:41.408211 | orchestrator | 2026-03-31 02:42:41.408223 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-03-31 02:42:41.408236 | orchestrator | Tuesday 31 March 2026 02:42:38 +0000 (0:00:10.807) 0:02:00.936 ********* 2026-03-31 02:42:41.408248 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:42:41.408260 | orchestrator | 2026-03-31 02:42:41.408273 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-31 02:42:41.408286 | orchestrator | Tuesday 31 March 2026 02:42:38 +0000 (0:00:00.523) 0:02:01.459 ********* 2026-03-31 02:42:41.408325 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-31 02:42:41.408340 | orchestrator | enable_outward_rabbitmq_True 2026-03-31 02:42:41.408355 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-31 02:42:41.408369 | orchestrator | outward_rabbitmq_restart 2026-03-31 02:42:41.408384 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:42:41.408399 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:42:41.408409 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:42:41.408417 | orchestrator | 2026-03-31 02:42:41.408425 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-31 02:42:41.408432 | orchestrator | skipping: no hosts matched 2026-03-31 02:42:41.408440 | orchestrator | 2026-03-31 02:42:41.408447 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-31 02:42:41.408455 | orchestrator | skipping: no hosts matched 2026-03-31 02:42:41.408463 | orchestrator | 2026-03-31 02:42:41.408470 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-31 02:42:41.408478 | orchestrator | skipping: no hosts matched 2026-03-31 02:42:41.408486 | orchestrator | 2026-03-31 02:42:41.408493 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-31 02:42:41.408520 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-31 02:42:41.408531 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:42:41.408539 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:42:41.408546 | orchestrator | 2026-03-31 02:42:41.408554 | orchestrator | 2026-03-31 02:42:41.408562 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:42:41.408570 | orchestrator | Tuesday 31 March 2026 02:42:41 +0000 (0:00:02.325) 0:02:03.785 ********* 2026-03-31 02:42:41.408577 | orchestrator | =============================================================================== 2026-03-31 02:42:41.408585 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.49s 2026-03-31 02:42:41.408593 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.97s 2026-03-31 02:42:41.408610 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.30s 2026-03-31 02:42:41.408618 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.33s 2026-03-31 02:42:41.408626 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s 2026-03-31 02:42:41.408633 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.74s 2026-03-31 02:42:41.408641 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.69s 2026-03-31 02:42:41.408649 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2026-03-31 02:42:41.408656 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.46s 2026-03-31 02:42:41.408664 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.43s 2026-03-31 02:42:41.408671 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2026-03-31 02:42:41.408679 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.38s 2026-03-31 02:42:41.408687 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.25s 2026-03-31 02:42:41.408694 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2026-03-31 02:42:41.408709 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.91s 2026-03-31 02:42:41.408717 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.86s 2026-03-31 02:42:41.408725 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.83s 2026-03-31 02:42:41.408732 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.82s 2026-03-31 02:42:41.408740 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s 2026-03-31 02:42:41.408748 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.57s 2026-03-31 02:42:43.911464 | orchestrator | 2026-03-31 02:42:43 | INFO  | Task 051bdfec-6122-4b04-829c-bcdad9ed67cc (openvswitch) was prepared for execution. 2026-03-31 02:42:43.911594 | orchestrator | 2026-03-31 02:42:43 | INFO  | It takes a moment until task 051bdfec-6122-4b04-829c-bcdad9ed67cc (openvswitch) has been started and output is visible here. 
2026-03-31 02:42:56.751683 | orchestrator | 2026-03-31 02:42:56.751808 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 02:42:56.751828 | orchestrator | 2026-03-31 02:42:56.751843 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 02:42:56.751858 | orchestrator | Tuesday 31 March 2026 02:42:48 +0000 (0:00:00.262) 0:00:00.262 ********* 2026-03-31 02:42:56.751873 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:42:56.751888 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:42:56.751903 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:42:56.751917 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:42:56.751931 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:42:56.751945 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:42:56.751960 | orchestrator | 2026-03-31 02:42:56.751974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 02:42:56.751989 | orchestrator | Tuesday 31 March 2026 02:42:48 +0000 (0:00:00.691) 0:00:00.953 ********* 2026-03-31 02:42:56.752003 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752018 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752032 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752046 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752060 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752075 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 02:42:56.752089 | orchestrator | 2026-03-31 02:42:56.752134 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-31 02:42:56.752149 | orchestrator | 2026-03-31 02:42:56.752164 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-31 02:42:56.752178 | orchestrator | Tuesday 31 March 2026 02:42:49 +0000 (0:00:00.611) 0:00:01.565 ********* 2026-03-31 02:42:56.752193 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:42:56.752209 | orchestrator | 2026-03-31 02:42:56.752223 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-31 02:42:56.752238 | orchestrator | Tuesday 31 March 2026 02:42:50 +0000 (0:00:01.168) 0:00:02.733 ********* 2026-03-31 02:42:56.752252 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-31 02:42:56.752267 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-31 02:42:56.752281 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-31 02:42:56.752295 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-31 02:42:56.752335 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-31 02:42:56.752350 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-31 02:42:56.752363 | orchestrator | 2026-03-31 02:42:56.752377 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-31 02:42:56.752392 | orchestrator | Tuesday 31 March 2026 02:42:51 +0000 (0:00:01.180) 0:00:03.913 ********* 2026-03-31 02:42:56.752406 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-31 02:42:56.752420 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-31 02:42:56.752435 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-31 02:42:56.752448 | orchestrator | changed: 
[testbed-node-2] => (item=openvswitch) 2026-03-31 02:42:56.752462 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-31 02:42:56.752477 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-31 02:42:56.752490 | orchestrator | 2026-03-31 02:42:56.752505 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-31 02:42:56.752518 | orchestrator | Tuesday 31 March 2026 02:42:53 +0000 (0:00:01.471) 0:00:05.385 ********* 2026-03-31 02:42:56.752532 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-31 02:42:56.752547 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:42:56.752562 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-31 02:42:56.752576 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:42:56.752590 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-31 02:42:56.752604 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:42:56.752617 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-31 02:42:56.752632 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:42:56.752646 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-31 02:42:56.752660 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:42:56.752674 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-31 02:42:56.752688 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:42:56.752702 | orchestrator | 2026-03-31 02:42:56.752716 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-31 02:42:56.752729 | orchestrator | Tuesday 31 March 2026 02:42:54 +0000 (0:00:01.225) 0:00:06.610 ********* 2026-03-31 02:42:56.752741 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:42:56.752754 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:42:56.752767 | orchestrator | skipping: [testbed-node-2] 
2026-03-31 02:42:56.752780 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:42:56.752792 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:42:56.752806 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:42:56.752818 | orchestrator | 2026-03-31 02:42:56.752831 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-31 02:42:56.752854 | orchestrator | Tuesday 31 March 2026 02:42:55 +0000 (0:00:00.794) 0:00:07.405 ********* 2026-03-31 02:42:56.752889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:56.752907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:56.752921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:56.753021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:56.753046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:56.753070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208400 | orchestrator | 2026-03-31 02:42:59.208414 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-31 02:42:59.208427 | orchestrator | Tuesday 31 March 2026 02:42:56 +0000 (0:00:01.395) 0:00:08.800 ********* 2026-03-31 02:42:59.208439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:42:59.208520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079443 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079494 | orchestrator | 2026-03-31 02:43:02.079499 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-31 02:43:02.079505 | orchestrator | Tuesday 31 March 2026 02:42:59 +0000 (0:00:02.448) 0:00:11.249 ********* 2026-03-31 02:43:02.079509 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:43:02.079514 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:43:02.079518 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:43:02.079521 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:43:02.079527 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:43:02.079533 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:43:02.079539 | orchestrator | 2026-03-31 02:43:02.079545 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-31 02:43:02.079551 | orchestrator | Tuesday 31 March 2026 02:43:00 +0000 (0:00:01.048) 0:00:12.298 ********* 2026-03-31 02:43:02.079557 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:02.079605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:28.174256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 02:43:28.174355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:28.174362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 
02:43:28.174393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:28.174397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 02:43:28.174412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-31 02:43:28.174416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-31 02:43:28.174420 | orchestrator |
2026-03-31 02:43:28.174425 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174430 | orchestrator | Tuesday 31 March 2026 02:43:02 +0000 (0:00:01.866) 0:00:14.164 *********
2026-03-31 02:43:28.174434 | orchestrator |
2026-03-31 02:43:28.174438 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174442 | orchestrator | Tuesday 31 March 2026 02:43:02 +0000 (0:00:00.369) 0:00:14.534 *********
2026-03-31 02:43:28.174449 | orchestrator |
2026-03-31 02:43:28.174453 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174456 | orchestrator | Tuesday 31 March 2026 02:43:02 +0000 (0:00:00.135) 0:00:14.669 *********
2026-03-31 02:43:28.174460 | orchestrator |
2026-03-31 02:43:28.174464 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174468 | orchestrator | Tuesday 31 March 2026 02:43:02 +0000 (0:00:00.128) 0:00:14.797 *********
2026-03-31 02:43:28.174471 | orchestrator |
2026-03-31 02:43:28.174475 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174479 | orchestrator | Tuesday 31 March 2026 02:43:02 +0000 (0:00:00.132) 0:00:14.930 *********
2026-03-31 02:43:28.174483 | orchestrator |
2026-03-31 02:43:28.174486 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-31 02:43:28.174490 | orchestrator | Tuesday 31 March 2026 02:43:03 +0000 (0:00:00.134) 0:00:15.064 *********
2026-03-31 02:43:28.174494 | orchestrator |
2026-03-31 02:43:28.174498 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-31 02:43:28.174501 | orchestrator | Tuesday 31 March 2026 02:43:03 +0000 (0:00:00.138) 0:00:15.203 *********
2026-03-31 02:43:28.174505 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:43:28.174510 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:43:28.174514 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:43:28.174518 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:43:28.174522 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:43:28.174525 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:43:28.174529 | orchestrator |
2026-03-31 02:43:28.174533 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-31 02:43:28.174537 | orchestrator | Tuesday 31 March 2026 02:43:12 +0000 (0:00:09.034) 0:00:24.237 *********
2026-03-31 02:43:28.174541 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:43:28.174549 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:43:28.174553 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:43:28.174557 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:43:28.174561 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:43:28.174565 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:43:28.174569 | orchestrator |
2026-03-31 02:43:28.174573 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-31 02:43:28.174577 | orchestrator | Tuesday 31 March 2026 02:43:13 +0000 (0:00:01.092) 0:00:25.330 *********
2026-03-31 02:43:28.174580 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:43:28.174584 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:43:28.174588 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:43:28.174592 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:43:28.174595 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:43:28.174599 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:43:28.174603 | orchestrator |
2026-03-31 02:43:28.174607 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-31 02:43:28.174610 | orchestrator | Tuesday 31 March 2026 02:43:21 +0000 (0:00:08.229) 0:00:33.559 *********
2026-03-31 02:43:28.174614 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-31 02:43:28.174619 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-31 02:43:28.174622 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-31 02:43:28.174626 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-31 02:43:28.174630 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-31 02:43:28.174634 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-31 02:43:28.174637 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-31 02:43:28.174647 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-31 02:43:41.668428 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-31 02:43:41.668586 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-31 02:43:41.668684 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-31 02:43:41.668702 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-31 02:43:41.668714 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668725 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668735 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668746 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668757 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668768 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-31 02:43:41.668779 | orchestrator |
2026-03-31 02:43:41.668791 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
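The loop items in the "Set system-id, hostname and hw-offload" task above each name an OVSDB column (`external_ids` or `other_config`), a key, a value, and optionally `state: absent`. As a rough illustration only — the actual task uses kolla-ansible's own OVSDB module, so the exact invocation may differ — these items map onto `ovs-vsctl set`/`ovs-vsctl remove` calls against the `Open_vSwitch` table:

```python
# Sketch: render the task's loop items as the ovs-vsctl commands they
# roughly correspond to. This mapping is an assumption for illustration;
# kolla-ansible talks to OVSDB through its own module, not the CLI.

def ovs_vsctl_command(item: dict) -> str:
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # 'state: absent' drops the key, e.g. clearing hw-offload.
        return f"ovs-vsctl remove Open_vSwitch . {col} {name}"
    return f"ovs-vsctl set Open_vSwitch . {col}:{name}={item['value']}"

items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]

for item in items:
    print(ovs_vsctl_command(item))
```

This also explains why the `hw-offload` items report `ok` rather than `changed` on every node: the key is already absent, so the removal is a no-op.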
2026-03-31 02:43:41.668803 | orchestrator | Tuesday 31 March 2026 02:43:28 +0000 (0:00:06.561) 0:00:40.120 *********
2026-03-31 02:43:41.668815 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-31 02:43:41.668827 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:43:41.668839 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-31 02:43:41.668850 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:43:41.668860 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-31 02:43:41.668877 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:43:41.668896 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-31 02:43:41.668915 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-31 02:43:41.668933 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-31 02:43:41.668952 | orchestrator |
2026-03-31 02:43:41.668971 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-31 02:43:41.668992 | orchestrator | Tuesday 31 March 2026 02:43:30 +0000 (0:00:02.527) 0:00:42.648 *********
2026-03-31 02:43:41.669011 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669031 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:43:41.669051 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669070 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:43:41.669088 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669101 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:43:41.669113 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669126 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669153 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-31 02:43:41.669167 | orchestrator |
2026-03-31 02:43:41.669179 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-31 02:43:41.669191 | orchestrator | Tuesday 31 March 2026 02:43:33 +0000 (0:00:03.294) 0:00:45.942 *********
2026-03-31 02:43:41.669204 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:43:41.669216 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:43:41.669255 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:43:41.669268 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:43:41.669281 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:43:41.669294 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:43:41.669306 | orchestrator |
2026-03-31 02:43:41.669318 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:43:41.669330 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 02:43:41.669368 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 02:43:41.669379 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 02:43:41.669390 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 02:43:41.669401 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 02:43:41.669412 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 02:43:41.669423 | orchestrator |
2026-03-31 02:43:41.669434 | orchestrator |
2026-03-31 02:43:41.669445 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:43:41.669455 | orchestrator | Tuesday 31 March 2026 02:43:41 +0000 (0:00:07.234) 0:00:53.177 *********
2026-03-31 02:43:41.669485 | orchestrator | ===============================================================================
2026-03-31 02:43:41.669496 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.46s
2026-03-31 02:43:41.669507 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.03s
2026-03-31 02:43:41.669518 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.56s
2026-03-31 02:43:41.669528 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.29s
2026-03-31 02:43:41.669539 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.53s
2026-03-31 02:43:41.669550 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.45s
2026-03-31 02:43:41.669560 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.87s
2026-03-31 02:43:41.669571 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.47s
2026-03-31 02:43:41.669582 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.40s
2026-03-31 02:43:41.669593 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.23s
2026-03-31 02:43:41.669603 | orchestrator | module-load : Load modules ---------------------------------------------- 1.18s
2026-03-31 02:43:41.669614 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.17s
2026-03-31 02:43:41.669624 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.09s
2026-03-31 02:43:41.669635 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.05s
2026-03-31 02:43:41.669645 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.04s
2026-03-31 02:43:41.669656 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.79s
2026-03-31 02:43:41.669667 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2026-03-31 02:43:41.669677 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-31 02:43:44.324527 | orchestrator | 2026-03-31 02:43:44 | INFO  | Task 7a7d8f28-6cdc-498b-85c0-f08d69168e53 (ovn) was prepared for execution.
2026-03-31 02:43:44.324612 | orchestrator | 2026-03-31 02:43:44 | INFO  | It takes a moment until task 7a7d8f28-6cdc-498b-85c0-f08d69168e53 (ovn) has been started and output is visible here.
2026-03-31 02:43:55.167071 | orchestrator |
2026-03-31 02:43:55.167179 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 02:43:55.167192 | orchestrator |
2026-03-31 02:43:55.167199 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 02:43:55.167205 | orchestrator | Tuesday 31 March 2026 02:43:48 +0000 (0:00:00.167) 0:00:00.167 *********
2026-03-31 02:43:55.167212 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:43:55.167219 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:43:55.167225 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:43:55.167230 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:43:55.167236 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:43:55.167242 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:43:55.167248 | orchestrator |
2026-03-31 02:43:55.167255 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 02:43:55.167261 | orchestrator | Tuesday 31 March 2026 02:43:49 +0000 (0:00:00.728) 0:00:00.896 *********
2026-03-31 02:43:55.167282 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-31 02:43:55.167289 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-31
02:43:55.167295 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-31 02:43:55.167301 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-31 02:43:55.167306 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-31 02:43:55.167312 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-31 02:43:55.167317 | orchestrator |
2026-03-31 02:43:55.167324 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-31 02:43:55.167330 | orchestrator |
2026-03-31 02:43:55.167336 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-31 02:43:55.167417 | orchestrator | Tuesday 31 March 2026 02:43:50 +0000 (0:00:00.836) 0:00:01.732 *********
2026-03-31 02:43:55.167426 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:43:55.167434 | orchestrator |
2026-03-31 02:43:55.167441 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-31 02:43:55.167446 | orchestrator | Tuesday 31 March 2026 02:43:51 +0000 (0:00:01.162) 0:00:02.894 *********
2026-03-31 02:43:55.167455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167528 | orchestrator |
2026-03-31 02:43:55.167533 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-31 02:43:55.167539 | orchestrator | Tuesday 31 March 2026 02:43:52 +0000 (0:00:01.202) 0:00:04.097 *********
2026-03-31 02:43:55.167550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167590 | orchestrator |
2026-03-31 02:43:55.167596 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-31 02:43:55.167602 | orchestrator | Tuesday 31 March 2026 02:43:53 +0000 (0:00:01.529) 0:00:05.626 *********
2026-03-31 02:43:55.167607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:43:55.167625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095181 | orchestrator |
2026-03-31 02:44:21.095189 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-31 02:44:21.095198 | orchestrator | Tuesday 31 March 2026 02:43:55 +0000 (0:00:01.170) 0:00:06.797 *********
2026-03-31 02:44:21.095206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095285 | orchestrator |
2026-03-31 02:44:21.095293 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-31 02:44:21.095300 | orchestrator | Tuesday 31 March 2026 02:43:56 +0000 (0:00:01.538) 0:00:08.335 *********
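Note: the loop items in the tasks above all iterate over the same kolla-style service dictionary. A minimal sketch of that structure and of the enabled-service filter applied to each item (the helper name `enabled_services` is illustrative, not taken from the role):

```python
# Hypothetical sketch of the service definition the loop items above iterate
# over; the dict values are copied from the log, the helper is illustrative.
ovn_controller_services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def enabled_services(services):
    """Mimic the per-item `enabled` filter the tasks apply to each loop item."""
    return [name for name, svc in services.items() if svc.get("enabled")]

print(enabled_services(ovn_controller_services))  # ['ovn-controller']
```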
2026-03-31 02:44:21.095312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 02:44:21.095437 | orchestrator |
2026-03-31 02:44:21.095453 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-31 02:44:21.095461 | orchestrator | Tuesday 31 March 2026 02:43:58 +0000 (0:00:01.464) 0:00:09.800 *********
2026-03-31 02:44:21.095469 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:44:21.095478 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:44:21.095485 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:44:21.095492 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:44:21.095499 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:44:21.095506 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:44:21.095513 | orchestrator |
2026-03-31 02:44:21.095521 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-31 02:44:21.095528 | orchestrator | Tuesday 31 March 2026 02:44:00 +0000 (0:00:02.565) 0:00:12.365 *********
2026-03-31 02:44:21.095535 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
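Note: the "Configure OVN in OVSDB" loop running here writes per-chassis `external_ids` into the local Open vSwitch database. A rough sketch of how the values visible in the surrounding log lines fit together, assuming the helper name and argument layout (both illustrative, not the role's actual code):

```python
# Hypothetical sketch: assemble the per-chassis OVN external_ids seen in the
# log. `ovn_external_ids`, `node_ip`, and `controller_ips` are illustrative
# names; the literal values come from the log entries around this point.

def ovn_external_ids(node_ip, controller_ips, sb_port=6642):
    """Build the external_ids key/value pairs written to the local OVSDB."""
    return {
        "ovn-encap-ip": node_ip,                # tunnel endpoint of this chassis
        "ovn-encap-type": "geneve",             # overlay encapsulation
        "ovn-remote": ",".join(                 # southbound DB endpoints
            f"tcp:{ip}:{sb_port}" for ip in controller_ips
        ),
        "ovn-remote-probe-interval": "60000",   # ms between SB liveness probes
        "ovn-openflow-probe-interval": "60",    # s between OpenFlow probes
    }

ids = ovn_external_ids(
    "192.168.16.14",
    ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
)
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```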
2026-03-31 02:44:21.095543 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-31 02:44:21.095550 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-31 02:44:21.095557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-31 02:44:21.095564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-31 02:44:21.095571 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-31 02:44:21.095585 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376692 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-31 02:44:57.376947 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.376968 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.377051 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.377067 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.377077 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.377088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-31 02:44:57.377100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377112 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377122 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377156 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-31 02:44:57.377167 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377177 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377189 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377225 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-31 02:44:57.377238 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377251 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377263 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377299 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-31 02:44:57.377311 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-31 02:44:57.377323 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-31 02:44:57.377336 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-31 02:44:57.377348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-31 02:44:57.377359 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-31 02:44:57.377370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-31 02:44:57.377380 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-31 02:44:57.377451 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-31 02:44:57.377465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-31 02:44:57.377483 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-31 02:44:57.377494 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-31 02:44:57.377505 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-31 02:44:57.377516 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-31 02:44:57.377526 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-31 02:44:57.377537 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-31 02:44:57.377548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-31 02:44:57.377559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-31 02:44:57.377570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-31 02:44:57.377580 | orchestrator |
2026-03-31 02:44:57.377592 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377603 | orchestrator | Tuesday 31 March 2026 02:44:20 +0000 (0:00:19.652) 0:00:32.017 *********
2026-03-31 02:44:57.377614 | orchestrator |
2026-03-31 02:44:57.377625 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377635 | orchestrator | Tuesday 31 March 2026 02:44:20 +0000 (0:00:00.347) 0:00:32.365 *********
2026-03-31 02:44:57.377646 | orchestrator |
2026-03-31 02:44:57.377656 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377667 | orchestrator | Tuesday 31 March 2026 02:44:20 +0000 (0:00:00.074) 0:00:32.439 *********
2026-03-31 02:44:57.377677 | orchestrator |
2026-03-31 02:44:57.377688 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377699 | orchestrator | Tuesday 31 March 2026 02:44:20 +0000 (0:00:00.069) 0:00:32.509 *********
2026-03-31 02:44:57.377709 | orchestrator |
2026-03-31 02:44:57.377720 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377731 | orchestrator | Tuesday 31 March 2026 02:44:20 +0000 (0:00:00.068) 0:00:32.578 *********
2026-03-31 02:44:57.377741 | orchestrator |
2026-03-31 02:44:57.377752 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-31 02:44:57.377762 | orchestrator | Tuesday 31 March 2026 02:44:21 +0000 (0:00:00.070) 0:00:32.648 *********
2026-03-31 02:44:57.377773 | orchestrator |
2026-03-31 02:44:57.377784 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-31 02:44:57.377795 | orchestrator | Tuesday 31 March 2026 02:44:21 +0000 (0:00:00.073) 0:00:32.722 *********
2026-03-31 02:44:57.377805 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:44:57.377817 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:44:57.377828 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:44:57.377838 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:44:57.377849 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:44:57.377859 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:44:57.377870 | orchestrator |
2026-03-31 02:44:57.377881 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-31 02:44:57.377894 | orchestrator | Tuesday 31 March 2026 02:44:22 +0000 (0:00:01.645) 0:00:34.368 *********
2026-03-31 02:44:57.377923 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:44:57.377941 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:44:57.377959 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:44:57.377976 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:44:57.377992 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:44:57.378010 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:44:57.378109 | orchestrator |
2026-03-31 02:44:57.378122 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-31 02:44:57.378134 | orchestrator |
2026-03-31 02:44:57.378236 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-31 02:44:57.378261 | orchestrator | Tuesday 31 March 2026 02:44:55 +0000 (0:00:32.312) 0:01:06.680 *********
2026-03-31 02:44:57.378280 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:44:57.378298 | orchestrator |
2026-03-31 02:44:57.378309 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-31 02:44:57.378320 | orchestrator | Tuesday 31 March 2026 02:44:55 +0000 (0:00:00.725) 0:01:07.406 *********
2026-03-31 02:44:57.378331 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:44:57.378341 | orchestrator |
2026-03-31 02:44:57.378352 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-31 02:44:57.378363 | orchestrator | Tuesday 31 March 2026 02:44:56 +0000 (0:00:00.541) 0:01:07.947 *********
2026-03-31 02:44:57.378373 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:44:57.378384 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:44:57.378452 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:44:57.378480 | orchestrator |
2026-03-31 02:44:57.378500 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-31 02:44:57.378534 | orchestrator | Tuesday 31 March 2026 02:44:57 +0000 (0:00:01.055) 0:01:09.003 *********
2026-03-31 02:45:09.454142 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.454252 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.454267 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.454278 | orchestrator |
2026-03-31 02:45:09.454289 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-31 02:45:09.454315 | orchestrator | Tuesday 31 March 2026 02:44:57 +0000 (0:00:00.341) 0:01:09.345 *********
2026-03-31 02:45:09.454325 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.454335 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.454345 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.454355 | orchestrator |
2026-03-31 02:45:09.454365 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-31 02:45:09.454375 | orchestrator | Tuesday 31 March 2026 02:44:58 +0000 (0:00:00.357) 0:01:09.702 *********
2026-03-31 02:45:09.454385 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.454395 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.454435 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.454445 | orchestrator |
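Note: the "Divide hosts by ... volume availability" tasks above partition the DB hosts into ad-hoc groups depending on whether an OVN database volume already exists, so later tasks can decide between bootstrapping a new cluster and joining an existing one. A minimal sketch of that partitioning logic (function and group names are illustrative, not the role's actual identifiers; in this fresh-deployment run no volumes exist yet):

```python
# Hypothetical sketch of a group_by-style partition on a per-host boolean fact.
def divide_by_nb_volume(host_facts):
    """Group hosts by whether an OVN NB DB volume was already present."""
    groups = {}
    for host, had_volume in host_facts.items():
        groups.setdefault(f"ovn-nb-had-volume-{had_volume}", []).append(host)
    return groups

groups = divide_by_nb_volume({
    "testbed-node-0": False,  # fresh deployment: no pre-existing NB volume
    "testbed-node-1": False,
    "testbed-node-2": False,
})
print(groups["ovn-nb-had-volume-False"])
# ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']
```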
2026-03-31 02:45:09.454455 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-31 02:45:09.454465 | orchestrator | Tuesday 31 March 2026 02:44:58 +0000 (0:00:00.358) 0:01:10.060 *********
2026-03-31 02:45:09.454474 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.454484 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.454494 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.454503 | orchestrator |
2026-03-31 02:45:09.454513 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-31 02:45:09.454523 | orchestrator | Tuesday 31 March 2026 02:44:58 +0000 (0:00:00.549) 0:01:10.610 *********
2026-03-31 02:45:09.454532 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.454543 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.454553 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.454568 | orchestrator |
2026-03-31 02:45:09.454584 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-31 02:45:09.454627 | orchestrator | Tuesday 31 March 2026 02:44:59 +0000 (0:00:00.304) 0:01:10.914 *********
2026-03-31 02:45:09.454646 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.454664 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.454681 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.454698 | orchestrator |
2026-03-31 02:45:09.454715 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-31 02:45:09.454727 | orchestrator | Tuesday 31 March 2026 02:44:59 +0000 (0:00:00.321) 0:01:11.236 *********
2026-03-31 02:45:09.454738 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.454750 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.454761 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.454772 | orchestrator |
2026-03-31 02:45:09.454783 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-31 02:45:09.454794 | orchestrator | Tuesday 31 March 2026 02:44:59 +0000 (0:00:00.308) 0:01:11.545 *********
2026-03-31 02:45:09.454805 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.454816 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.454832 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.454853 | orchestrator |
2026-03-31 02:45:09.454877 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-31 02:45:09.454893 | orchestrator | Tuesday 31 March 2026 02:45:00 +0000 (0:00:00.352) 0:01:11.897 *********
2026-03-31 02:45:09.454909 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.454926 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.454942 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.454959 | orchestrator |
2026-03-31 02:45:09.454978 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-31 02:45:09.454995 | orchestrator | Tuesday 31 March 2026 02:45:00 +0000 (0:00:00.530) 0:01:12.428 *********
2026-03-31 02:45:09.455012 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455025 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455035 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455045 | orchestrator |
2026-03-31 02:45:09.455054 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-31 02:45:09.455064 | orchestrator | Tuesday 31 March 2026 02:45:01 +0000 (0:00:00.326) 0:01:12.755 *********
2026-03-31 02:45:09.455073 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455083 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455092 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455102 | orchestrator |
2026-03-31 02:45:09.455117 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-31 02:45:09.455132 | orchestrator | Tuesday 31 March 2026 02:45:01 +0000 (0:00:00.349) 0:01:13.105 *********
2026-03-31 02:45:09.455159 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455176 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455191 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455205 | orchestrator |
2026-03-31 02:45:09.455221 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-31 02:45:09.455238 | orchestrator | Tuesday 31 March 2026 02:45:01 +0000 (0:00:00.311) 0:01:13.416 *********
2026-03-31 02:45:09.455253 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455268 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455284 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455299 | orchestrator |
2026-03-31 02:45:09.455315 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-31 02:45:09.455332 | orchestrator | Tuesday 31 March 2026 02:45:02 +0000 (0:00:00.518) 0:01:13.934 *********
2026-03-31 02:45:09.455349 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455366 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455382 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455424 | orchestrator |
2026-03-31 02:45:09.455437 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-31 02:45:09.455460 | orchestrator | Tuesday 31 March 2026 02:45:02 +0000 (0:00:00.313) 0:01:14.248 *********
2026-03-31 02:45:09.455470 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455480 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455489 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455498 | orchestrator |
2026-03-31 02:45:09.455508 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-31 02:45:09.455518 | orchestrator | Tuesday 31 March 2026 02:45:02 +0000 (0:00:00.335) 0:01:14.583 *********
2026-03-31 02:45:09.455552 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455569 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455586 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455602 | orchestrator |
2026-03-31 02:45:09.455620 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-31 02:45:09.455649 | orchestrator | Tuesday 31 March 2026 02:45:03 +0000 (0:00:00.298) 0:01:14.882 *********
2026-03-31 02:45:09.455668 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:45:09.455685 | orchestrator |
2026-03-31 02:45:09.455695 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-31 02:45:09.455705 | orchestrator | Tuesday 31 March 2026 02:45:04 +0000 (0:00:00.863) 0:01:15.746 *********
2026-03-31 02:45:09.455714 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.455724 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.455733 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.455749 | orchestrator |
2026-03-31 02:45:09.455765 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-31 02:45:09.455781 | orchestrator | Tuesday 31 March 2026 02:45:04 +0000 (0:00:00.525) 0:01:16.271 *********
2026-03-31 02:45:09.455797 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:45:09.455812 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:45:09.455828 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:45:09.455842 | orchestrator |
2026-03-31 02:45:09.455858 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-31 02:45:09.455872 | orchestrator | Tuesday 31 March 2026 02:45:05 +0000 (0:00:00.459) 0:01:16.731 *********
2026-03-31 02:45:09.455889 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.455905 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.455922 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.455939 | orchestrator |
2026-03-31 02:45:09.455955 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-31 02:45:09.455971 | orchestrator | Tuesday 31 March 2026 02:45:05 +0000 (0:00:00.367) 0:01:17.098 *********
2026-03-31 02:45:09.455988 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.456004 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.456020 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.456036 | orchestrator |
2026-03-31 02:45:09.456053 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-31 02:45:09.456069 | orchestrator | Tuesday 31 March 2026 02:45:06 +0000 (0:00:00.737) 0:01:17.836 *********
2026-03-31 02:45:09.456084 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.456100 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.456116 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.456133 | orchestrator |
2026-03-31 02:45:09.456150 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-31 02:45:09.456167 | orchestrator | Tuesday 31 March 2026 02:45:06 +0000 (0:00:00.393) 0:01:18.229 *********
2026-03-31 02:45:09.456183 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:45:09.456200 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:45:09.456217 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:45:09.456234 | orchestrator |
2026-03-31 02:45:09.456251 | orchestrator | TASK [ovn-db : Set
bootstrap args fact for NB (new member)] ******************** 2026-03-31 02:45:09.456268 | orchestrator | Tuesday 31 March 2026 02:45:06 +0000 (0:00:00.368) 0:01:18.598 ********* 2026-03-31 02:45:09.456304 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:45:09.456321 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:45:09.456338 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:45:09.456354 | orchestrator | 2026-03-31 02:45:09.456370 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-31 02:45:09.456387 | orchestrator | Tuesday 31 March 2026 02:45:07 +0000 (0:00:00.350) 0:01:18.949 ********* 2026-03-31 02:45:09.456475 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:45:09.456494 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:45:09.456510 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:45:09.456526 | orchestrator | 2026-03-31 02:45:09.456542 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-31 02:45:09.456558 | orchestrator | Tuesday 31 March 2026 02:45:07 +0000 (0:00:00.685) 0:01:19.634 ********* 2026-03-31 02:45:09.456579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:09.456600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-31 02:45:09.456617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:09.456664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.093925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094171 | orchestrator | 2026-03-31 02:45:16.094184 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-31 02:45:16.094196 | orchestrator | Tuesday 31 March 2026 02:45:09 +0000 (0:00:01.450) 0:01:21.084 ********* 2026-03-31 02:45:16.094210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094360 | orchestrator | 2026-03-31 02:45:16.094371 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-31 02:45:16.094382 | orchestrator | Tuesday 31 March 2026 02:45:13 +0000 (0:00:04.145) 0:01:25.230 ********* 2026-03-31 02:45:16.094393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:16.094494 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.249337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.249526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.249552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.249568 | orchestrator | 2026-03-31 02:45:45.249584 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:45.249601 | 
orchestrator | Tuesday 31 March 2026 02:45:15 +0000 (0:00:02.094) 0:01:27.325 ********* 2026-03-31 02:45:45.249617 | orchestrator | 2026-03-31 02:45:45.249632 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:45.249648 | orchestrator | Tuesday 31 March 2026 02:45:15 +0000 (0:00:00.066) 0:01:27.391 ********* 2026-03-31 02:45:45.249663 | orchestrator | 2026-03-31 02:45:45.249677 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:45.249692 | orchestrator | Tuesday 31 March 2026 02:45:16 +0000 (0:00:00.263) 0:01:27.654 ********* 2026-03-31 02:45:45.249707 | orchestrator | 2026-03-31 02:45:45.249720 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-31 02:45:45.249729 | orchestrator | Tuesday 31 March 2026 02:45:16 +0000 (0:00:00.065) 0:01:27.720 ********* 2026-03-31 02:45:45.249738 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:45:45.249748 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:45:45.249756 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:45:45.249765 | orchestrator | 2026-03-31 02:45:45.249778 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-31 02:45:45.249794 | orchestrator | Tuesday 31 March 2026 02:45:23 +0000 (0:00:07.662) 0:01:35.383 ********* 2026-03-31 02:45:45.249807 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:45:45.249821 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:45:45.249831 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:45:45.249841 | orchestrator | 2026-03-31 02:45:45.249851 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-31 02:45:45.249861 | orchestrator | Tuesday 31 March 2026 02:45:31 +0000 (0:00:07.732) 0:01:43.115 ********* 2026-03-31 02:45:45.249871 | orchestrator | changed: 
[testbed-node-1] 2026-03-31 02:45:45.249880 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:45:45.249890 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:45:45.249900 | orchestrator | 2026-03-31 02:45:45.249910 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-31 02:45:45.249920 | orchestrator | Tuesday 31 March 2026 02:45:38 +0000 (0:00:06.720) 0:01:49.835 ********* 2026-03-31 02:45:45.249930 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:45:45.249940 | orchestrator | 2026-03-31 02:45:45.249949 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-31 02:45:45.249959 | orchestrator | Tuesday 31 March 2026 02:45:38 +0000 (0:00:00.133) 0:01:49.969 ********* 2026-03-31 02:45:45.249969 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:45:45.249979 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:45:45.249989 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:45:45.249999 | orchestrator | 2026-03-31 02:45:45.250009 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-31 02:45:45.250070 | orchestrator | Tuesday 31 March 2026 02:45:39 +0000 (0:00:01.049) 0:01:51.018 ********* 2026-03-31 02:45:45.250080 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:45:45.250100 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:45:45.250109 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:45:45.250119 | orchestrator | 2026-03-31 02:45:45.250129 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-31 02:45:45.250139 | orchestrator | Tuesday 31 March 2026 02:45:40 +0000 (0:00:00.645) 0:01:51.663 ********* 2026-03-31 02:45:45.250149 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:45:45.250159 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:45:45.250169 | orchestrator | ok: [testbed-node-2] 2026-03-31 
02:45:45.250179 | orchestrator | 2026-03-31 02:45:45.250190 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-31 02:45:45.250213 | orchestrator | Tuesday 31 March 2026 02:45:40 +0000 (0:00:00.813) 0:01:52.477 ********* 2026-03-31 02:45:45.250223 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:45:45.250231 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:45:45.250240 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:45:45.250249 | orchestrator | 2026-03-31 02:45:45.250257 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-31 02:45:45.250266 | orchestrator | Tuesday 31 March 2026 02:45:41 +0000 (0:00:00.642) 0:01:53.119 ********* 2026-03-31 02:45:45.250275 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:45:45.250283 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:45:45.250311 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:45:45.250321 | orchestrator | 2026-03-31 02:45:45.250329 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-31 02:45:45.250338 | orchestrator | Tuesday 31 March 2026 02:45:42 +0000 (0:00:01.224) 0:01:54.344 ********* 2026-03-31 02:45:45.250347 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:45:45.250355 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:45:45.250364 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:45:45.250372 | orchestrator | 2026-03-31 02:45:45.250382 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-31 02:45:45.250391 | orchestrator | Tuesday 31 March 2026 02:45:43 +0000 (0:00:00.804) 0:01:55.148 ********* 2026-03-31 02:45:45.250399 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:45:45.250408 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:45:45.250416 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:45:45.250425 | orchestrator | 2026-03-31 
02:45:45.250481 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-31 02:45:45.250490 | orchestrator | Tuesday 31 March 2026 02:45:43 +0000 (0:00:00.323) 0:01:55.471 ********* 2026-03-31 02:45:45.250501 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250512 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250522 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250531 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250547 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250556 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250565 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250578 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:45.250596 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686455 | orchestrator | 2026-03-31 02:45:52.686531 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-31 02:45:52.686538 | orchestrator | Tuesday 31 March 2026 02:45:45 +0000 (0:00:01.399) 0:01:56.871 ********* 2026-03-31 02:45:52.686544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686552 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686556 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686561 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-31 02:45:52.686614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686618 | orchestrator | 2026-03-31 02:45:52.686622 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-31 02:45:52.686626 | orchestrator | Tuesday 31 March 2026 02:45:49 +0000 (0:00:04.017) 0:02:00.888 ********* 2026-03-31 02:45:52.686644 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686651 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 
02:45:52.686665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686696 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 02:45:52.686711 | orchestrator | 2026-03-31 02:45:52.686717 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:52.686722 | orchestrator | Tuesday 31 March 2026 02:45:52 +0000 (0:00:03.220) 0:02:04.108 ********* 2026-03-31 02:45:52.686728 | orchestrator | 2026-03-31 02:45:52.686734 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:52.686741 | orchestrator | Tuesday 31 March 2026 02:45:52 +0000 (0:00:00.063) 0:02:04.172 ********* 2026-03-31 02:45:52.686747 | orchestrator | 2026-03-31 02:45:52.686753 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 02:45:52.686759 | orchestrator | Tuesday 31 March 2026 02:45:52 +0000 (0:00:00.068) 0:02:04.241 ********* 2026-03-31 02:45:52.686763 | orchestrator | 2026-03-31 02:45:52.686771 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-31 02:46:17.070547 | orchestrator | Tuesday 31 March 2026 02:45:52 +0000 (0:00:00.066) 0:02:04.308 ********* 2026-03-31 02:46:17.070682 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:46:17.070702 | orchestrator | changed: 
[testbed-node-2] 2026-03-31 02:46:17.070713 | orchestrator | 2026-03-31 02:46:17.070725 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-31 02:46:17.070738 | orchestrator | Tuesday 31 March 2026 02:45:58 +0000 (0:00:06.218) 0:02:10.526 ********* 2026-03-31 02:46:17.070749 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:46:17.070760 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:46:17.070771 | orchestrator | 2026-03-31 02:46:17.070782 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-31 02:46:17.070819 | orchestrator | Tuesday 31 March 2026 02:46:05 +0000 (0:00:06.215) 0:02:16.742 ********* 2026-03-31 02:46:17.070831 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:46:17.070842 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:46:17.070853 | orchestrator | 2026-03-31 02:46:17.070863 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-31 02:46:17.070874 | orchestrator | Tuesday 31 March 2026 02:46:11 +0000 (0:00:06.140) 0:02:22.882 ********* 2026-03-31 02:46:17.070885 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:46:17.070896 | orchestrator | 2026-03-31 02:46:17.070906 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-31 02:46:17.070917 | orchestrator | Tuesday 31 March 2026 02:46:11 +0000 (0:00:00.173) 0:02:23.056 ********* 2026-03-31 02:46:17.070928 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:46:17.070940 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:46:17.070950 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:46:17.070961 | orchestrator | 2026-03-31 02:46:17.070974 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-31 02:46:17.070987 | orchestrator | Tuesday 31 March 2026 02:46:12 +0000 (0:00:01.121) 0:02:24.178 ********* 
2026-03-31 02:46:17.071000 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:46:17.071012 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:46:17.071025 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:46:17.071037 | orchestrator | 2026-03-31 02:46:17.071049 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-31 02:46:17.071062 | orchestrator | Tuesday 31 March 2026 02:46:13 +0000 (0:00:00.645) 0:02:24.824 ********* 2026-03-31 02:46:17.071075 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:46:17.071088 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:46:17.071100 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:46:17.071114 | orchestrator | 2026-03-31 02:46:17.071126 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-31 02:46:17.071139 | orchestrator | Tuesday 31 March 2026 02:46:14 +0000 (0:00:00.825) 0:02:25.649 ********* 2026-03-31 02:46:17.071152 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:46:17.071164 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:46:17.071176 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:46:17.071189 | orchestrator | 2026-03-31 02:46:17.071202 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-31 02:46:17.071214 | orchestrator | Tuesday 31 March 2026 02:46:14 +0000 (0:00:00.666) 0:02:26.315 ********* 2026-03-31 02:46:17.071227 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:46:17.071240 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:46:17.071252 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:46:17.071265 | orchestrator | 2026-03-31 02:46:17.071277 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-31 02:46:17.071290 | orchestrator | Tuesday 31 March 2026 02:46:15 +0000 (0:00:01.079) 0:02:27.395 ********* 2026-03-31 02:46:17.071302 | orchestrator 
| ok: [testbed-node-0] 2026-03-31 02:46:17.071315 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:46:17.071327 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:46:17.071338 | orchestrator | 2026-03-31 02:46:17.071376 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:46:17.071390 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-31 02:46:17.071402 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-31 02:46:17.071413 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-31 02:46:17.071424 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:46:17.071444 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:46:17.071482 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 02:46:17.071493 | orchestrator | 2026-03-31 02:46:17.071504 | orchestrator | 2026-03-31 02:46:17.071531 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:46:17.071542 | orchestrator | Tuesday 31 March 2026 02:46:16 +0000 (0:00:00.870) 0:02:28.265 ********* 2026-03-31 02:46:17.071553 | orchestrator | =============================================================================== 2026-03-31 02:46:17.071564 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.31s 2026-03-31 02:46:17.071574 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.65s 2026-03-31 02:46:17.071585 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.95s 2026-03-31 02:46:17.071596 | orchestrator | ovn-db 
: Restart ovn-nb-db container ----------------------------------- 13.88s 2026-03-31 02:46:17.071606 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 12.86s 2026-03-31 02:46:17.071636 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.15s 2026-03-31 02:46:17.071648 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.02s 2026-03-31 02:46:17.071659 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.22s 2026-03-31 02:46:17.071670 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s 2026-03-31 02:46:17.071681 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.09s 2026-03-31 02:46:17.071691 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.65s 2026-03-31 02:46:17.071702 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2026-03-31 02:46:17.071713 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.53s 2026-03-31 02:46:17.071723 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s 2026-03-31 02:46:17.071734 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2026-03-31 02:46:17.071744 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2026-03-31 02:46:17.071755 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.22s 2026-03-31 02:46:17.071765 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.20s 2026-03-31 02:46:17.071776 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.17s 2026-03-31 02:46:17.071787 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.16s 2026-03-31 02:46:17.402189 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-31 02:46:17.402322 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-03-31 02:46:19.590950 | orchestrator | 2026-03-31 02:46:19 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-31 02:46:29.705050 | orchestrator | 2026-03-31 02:46:29 | INFO  | Task 10acf08f-8857-40b3-a94c-25d9812f2352 (wipe-partitions) was prepared for execution. 2026-03-31 02:46:29.705163 | orchestrator | 2026-03-31 02:46:29 | INFO  | It takes a moment until task 10acf08f-8857-40b3-a94c-25d9812f2352 (wipe-partitions) has been started and output is visible here. 2026-03-31 02:46:43.148565 | orchestrator | 2026-03-31 02:46:43.148649 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-31 02:46:43.148656 | orchestrator | 2026-03-31 02:46:43.148660 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-31 02:46:43.148664 | orchestrator | Tuesday 31 March 2026 02:46:34 +0000 (0:00:00.150) 0:00:00.150 ********* 2026-03-31 02:46:43.148684 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:46:43.148690 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:46:43.148693 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:46:43.148697 | orchestrator | 2026-03-31 02:46:43.148701 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-31 02:46:43.148705 | orchestrator | Tuesday 31 March 2026 02:46:34 +0000 (0:00:00.723) 0:00:00.874 ********* 2026-03-31 02:46:43.148709 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:46:43.148713 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:46:43.148717 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:46:43.148720 | orchestrator | 2026-03-31 02:46:43.148724 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-31 02:46:43.148728 | orchestrator | Tuesday 31 March 2026 02:46:35 +0000 (0:00:00.432) 0:00:01.306 ********* 2026-03-31 02:46:43.148732 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:46:43.148737 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:46:43.148741 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:46:43.148744 | orchestrator | 2026-03-31 02:46:43.148748 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-31 02:46:43.148752 | orchestrator | Tuesday 31 March 2026 02:46:35 +0000 (0:00:00.619) 0:00:01.926 ********* 2026-03-31 02:46:43.148756 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:46:43.148759 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:46:43.148764 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:46:43.148768 | orchestrator | 2026-03-31 02:46:43.148772 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-31 02:46:43.148775 | orchestrator | Tuesday 31 March 2026 02:46:36 +0000 (0:00:00.273) 0:00:02.199 ********* 2026-03-31 02:46:43.148779 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-31 02:46:43.148783 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-31 02:46:43.148787 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-31 02:46:43.148791 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-31 02:46:43.148794 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-31 02:46:43.148798 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-31 02:46:43.148813 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-31 02:46:43.148817 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-31 02:46:43.148821 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-03-31 02:46:43.148824 | orchestrator | 2026-03-31 02:46:43.148828 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-31 02:46:43.148832 | orchestrator | Tuesday 31 March 2026 02:46:37 +0000 (0:00:01.227) 0:00:03.427 ********* 2026-03-31 02:46:43.148836 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-31 02:46:43.148840 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-31 02:46:43.148843 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-31 02:46:43.148847 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-31 02:46:43.148851 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-31 02:46:43.148854 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-31 02:46:43.148858 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-31 02:46:43.148862 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-31 02:46:43.148865 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-31 02:46:43.148869 | orchestrator | 2026-03-31 02:46:43.148873 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-31 02:46:43.148877 | orchestrator | Tuesday 31 March 2026 02:46:39 +0000 (0:00:01.594) 0:00:05.021 ********* 2026-03-31 02:46:43.148880 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-31 02:46:43.148884 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-31 02:46:43.148888 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-31 02:46:43.148891 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-31 02:46:43.148899 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-31 02:46:43.148903 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-31 02:46:43.148907 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-31 02:46:43.148911 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-31 02:46:43.148914 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-31 02:46:43.148918 | orchestrator | 2026-03-31 02:46:43.148922 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-31 02:46:43.148925 | orchestrator | Tuesday 31 March 2026 02:46:41 +0000 (0:00:02.295) 0:00:07.317 ********* 2026-03-31 02:46:43.148929 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:46:43.148933 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:46:43.148936 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:46:43.148940 | orchestrator | 2026-03-31 02:46:43.148944 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-31 02:46:43.148948 | orchestrator | Tuesday 31 March 2026 02:46:41 +0000 (0:00:00.652) 0:00:07.970 ********* 2026-03-31 02:46:43.148951 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:46:43.148955 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:46:43.148959 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:46:43.148962 | orchestrator | 2026-03-31 02:46:43.148966 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:46:43.148971 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:46:43.148975 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:46:43.148990 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:46:43.148994 | orchestrator | 2026-03-31 02:46:43.148998 | orchestrator | 2026-03-31 02:46:43.149002 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:46:43.149005 | orchestrator | Tuesday 31 March 2026 02:46:42 +0000 (0:00:00.708) 
0:00:08.678 ********* 2026-03-31 02:46:43.149009 | orchestrator | =============================================================================== 2026-03-31 02:46:43.149013 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s 2026-03-31 02:46:43.149017 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2026-03-31 02:46:43.149021 | orchestrator | Check device availability ----------------------------------------------- 1.23s 2026-03-31 02:46:43.149024 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.72s 2026-03-31 02:46:43.149028 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s 2026-03-31 02:46:43.149032 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s 2026-03-31 02:46:43.149035 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s 2026-03-31 02:46:43.149039 | orchestrator | Remove all rook related logical devices --------------------------------- 0.43s 2026-03-31 02:46:43.149043 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2026-03-31 02:46:55.945697 | orchestrator | 2026-03-31 02:46:55 | INFO  | Task 8b839ec6-587d-4776-a301-39001daffbff (facts) was prepared for execution. 2026-03-31 02:46:55.945809 | orchestrator | 2026-03-31 02:46:55 | INFO  | It takes a moment until task 8b839ec6-587d-4776-a301-39001daffbff (facts) has been started and output is visible here. 
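The wipe-partitions play above runs wipefs, zeroes the first 32M of each device, and reloads udev. A minimal sketch of that sequence, run against a throwaway image file instead of a real disk (the file path is illustrative; on real hardware the targets would be devices like /dev/sdb as in the log):

```shell
#!/bin/sh
# Sketch of the wipe steps from the play, against a scratch file, not a disk.
img=$(mktemp)

# Stand-in "device": a 40M file (on real hosts this would be /dev/sdX).
dd if=/dev/zero of="$img" bs=1M count=40 status=none

# The play's "Wipe partitions with wipefs" step would be roughly:
#   wipefs --all /dev/sdX
# (skipped here since a plain file carries no partition signatures).

# "Overwrite first 32M with zeros" step:
dd if=/dev/zero of="$img" bs=1M count=32 conv=fsync status=none

echo "wiped first 32M of $img"
```

On real nodes this would be followed by `udevadm control --reload` and `udevadm trigger`, matching the "Reload udev rules" and "Request device events from the kernel" tasks in the recap.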
2026-03-31 02:47:09.162202 | orchestrator | 2026-03-31 02:47:09.162395 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-31 02:47:09.162426 | orchestrator | 2026-03-31 02:47:09.162446 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-31 02:47:09.162466 | orchestrator | Tuesday 31 March 2026 02:47:00 +0000 (0:00:00.284) 0:00:00.284 ********* 2026-03-31 02:47:09.162585 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:47:09.162610 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:47:09.162627 | orchestrator | ok: [testbed-manager] 2026-03-31 02:47:09.162638 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:47:09.162649 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:47:09.162662 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:47:09.162675 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:47:09.162687 | orchestrator | 2026-03-31 02:47:09.162699 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-31 02:47:09.162713 | orchestrator | Tuesday 31 March 2026 02:47:01 +0000 (0:00:01.144) 0:00:01.428 ********* 2026-03-31 02:47:09.162726 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:47:09.162740 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:47:09.162752 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:47:09.162764 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:47:09.162776 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:47:09.162789 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:09.162802 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:47:09.162814 | orchestrator | 2026-03-31 02:47:09.162827 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-31 02:47:09.162838 | orchestrator | 2026-03-31 02:47:09.162851 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-31 02:47:09.162863 | orchestrator | Tuesday 31 March 2026 02:47:02 +0000 (0:00:01.344) 0:00:02.773 ********* 2026-03-31 02:47:09.162876 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:47:09.162889 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:47:09.162901 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:47:09.162913 | orchestrator | ok: [testbed-manager] 2026-03-31 02:47:09.162925 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:47:09.162937 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:47:09.162950 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:47:09.162962 | orchestrator | 2026-03-31 02:47:09.162975 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-31 02:47:09.162987 | orchestrator | 2026-03-31 02:47:09.163000 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-31 02:47:09.163012 | orchestrator | Tuesday 31 March 2026 02:47:08 +0000 (0:00:05.138) 0:00:07.911 ********* 2026-03-31 02:47:09.163025 | orchestrator | skipping: [testbed-manager] 2026-03-31 02:47:09.163038 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:47:09.163049 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:47:09.163060 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:47:09.163070 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:47:09.163081 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:09.163091 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:47:09.163102 | orchestrator | 2026-03-31 02:47:09.163113 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 02:47:09.163124 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163224 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-31 02:47:09.163244 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163256 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163266 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163278 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163298 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 02:47:09.163309 | orchestrator | 2026-03-31 02:47:09.163320 | orchestrator | 2026-03-31 02:47:09.163331 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 02:47:09.163342 | orchestrator | Tuesday 31 March 2026 02:47:08 +0000 (0:00:00.578) 0:00:08.490 ********* 2026-03-31 02:47:09.163353 | orchestrator | =============================================================================== 2026-03-31 02:47:09.163364 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.14s 2026-03-31 02:47:09.163374 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2026-03-31 02:47:09.163385 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-03-31 02:47:09.163396 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-03-31 02:47:11.773458 | orchestrator | 2026-03-31 02:47:11 | INFO  | Task 84828f76-375d-4ef0-85c7-ce6e7426e7a3 (ceph-configure-lvm-volumes) was prepared for execution. 
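The ceph-configure-lvm-volumes task that follows maps /dev/disk/by-id symlinks (the scsi-0QEMU_QEMU_HARDDISK_… names in the log) back to kernel device names. A minimal sketch of that resolution, demonstrated on a scratch directory standing in for /dev/disk/by-id — all names here are illustrative, not taken from the job:

```shell
#!/bin/sh
# Sketch: resolve by-id symlinks to their underlying device names,
# using a temp dir as a stand-in for /dev/disk/by-id.
byid=$(mktemp -d)
mkdir -p "$byid/dev"
touch "$byid/dev/sdb"
ln -s "$byid/dev/sdb" "$byid/scsi-0QEMU_QEMU_HARDDISK_example"

# For each by-id link, print "link-name -> kernel-name".
for link in "$byid"/scsi-*; do
  target=$(readlink -f "$link")
  echo "$(basename "$link") -> $(basename "$target")"
done
```

On a real node the loop would iterate over /dev/disk/by-id directly, which is how a stable disk identifier can be tied to the /dev/sdb-style names used in the wipe play.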
2026-03-31 02:47:11.773600 | orchestrator | 2026-03-31 02:47:11 | INFO  | It takes a moment until task 84828f76-375d-4ef0-85c7-ce6e7426e7a3 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-31 02:47:24.718151 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-31 02:47:24.718253 | orchestrator | 2.16.14 2026-03-31 02:47:24.718266 | orchestrator | 2026-03-31 02:47:24.718274 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-31 02:47:24.718281 | orchestrator | 2026-03-31 02:47:24.718287 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-31 02:47:24.718294 | orchestrator | Tuesday 31 March 2026 02:47:16 +0000 (0:00:00.374) 0:00:00.374 ********* 2026-03-31 02:47:24.718302 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-31 02:47:24.718307 | orchestrator | 2026-03-31 02:47:24.718328 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-31 02:47:24.718335 | orchestrator | Tuesday 31 March 2026 02:47:16 +0000 (0:00:00.292) 0:00:00.667 ********* 2026-03-31 02:47:24.718341 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:47:24.718348 | orchestrator | 2026-03-31 02:47:24.718354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:47:24.718360 | orchestrator | Tuesday 31 March 2026 02:47:17 +0000 (0:00:00.249) 0:00:00.916 ********* 2026-03-31 02:47:24.718365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-31 02:47:24.718371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-31 02:47:24.718377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-31 02:47:24.718382 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-31 02:47:24.718387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-31 02:47:24.718393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-31 02:47:24.718399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-31 02:47:24.718405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-31 02:47:24.718411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-31 02:47:24.718418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-31 02:47:24.718424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-31 02:47:24.718429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-31 02:47:24.718457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-31 02:47:24.718463 | orchestrator |
2026-03-31 02:47:24.718469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718475 | orchestrator | Tuesday 31 March 2026 02:47:17 +0000 (0:00:00.517) 0:00:01.434 *********
2026-03-31 02:47:24.718482 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718488 | orchestrator |
2026-03-31 02:47:24.718494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718500 | orchestrator | Tuesday 31 March 2026 02:47:17 +0000 (0:00:00.229) 0:00:01.664 *********
2026-03-31 02:47:24.718506 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718545 | orchestrator |
2026-03-31 02:47:24.718552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718557 | orchestrator | Tuesday 31 March 2026 02:47:18 +0000 (0:00:00.211) 0:00:01.875 *********
2026-03-31 02:47:24.718563 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718568 | orchestrator |
2026-03-31 02:47:24.718574 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718579 | orchestrator | Tuesday 31 March 2026 02:47:18 +0000 (0:00:00.231) 0:00:02.106 *********
2026-03-31 02:47:24.718585 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718591 | orchestrator |
2026-03-31 02:47:24.718598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718604 | orchestrator | Tuesday 31 March 2026 02:47:18 +0000 (0:00:00.207) 0:00:02.313 *********
2026-03-31 02:47:24.718610 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718616 | orchestrator |
2026-03-31 02:47:24.718622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718629 | orchestrator | Tuesday 31 March 2026 02:47:18 +0000 (0:00:00.208) 0:00:02.522 *********
2026-03-31 02:47:24.718635 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718642 | orchestrator |
2026-03-31 02:47:24.718647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718653 | orchestrator | Tuesday 31 March 2026 02:47:19 +0000 (0:00:00.214) 0:00:02.737 *********
2026-03-31 02:47:24.718659 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718665 | orchestrator |
2026-03-31 02:47:24.718672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718678 | orchestrator | Tuesday 31 March 2026 02:47:19 +0000 (0:00:00.227) 0:00:02.964 *********
2026-03-31 02:47:24.718685 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718692 | orchestrator |
2026-03-31 02:47:24.718698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718705 | orchestrator | Tuesday 31 March 2026 02:47:19 +0000 (0:00:00.213) 0:00:03.178 *********
2026-03-31 02:47:24.718711 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047)
2026-03-31 02:47:24.718720 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047)
2026-03-31 02:47:24.718726 | orchestrator |
2026-03-31 02:47:24.718732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718756 | orchestrator | Tuesday 31 March 2026 02:47:19 +0000 (0:00:00.456) 0:00:03.634 *********
2026-03-31 02:47:24.718764 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9)
2026-03-31 02:47:24.718770 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9)
2026-03-31 02:47:24.718777 | orchestrator |
2026-03-31 02:47:24.718783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718790 | orchestrator | Tuesday 31 March 2026 02:47:20 +0000 (0:00:00.695) 0:00:04.329 *********
2026-03-31 02:47:24.718803 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c)
2026-03-31 02:47:24.718818 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c)
2026-03-31 02:47:24.718825 | orchestrator |
2026-03-31 02:47:24.718832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718839 | orchestrator | Tuesday 31 March 2026 02:47:21 +0000 (0:00:00.686) 0:00:05.016 *********
2026-03-31 02:47:24.718845 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e)
2026-03-31 02:47:24.718851 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e)
2026-03-31 02:47:24.718858 | orchestrator |
2026-03-31 02:47:24.718864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:24.718871 | orchestrator | Tuesday 31 March 2026 02:47:22 +0000 (0:00:00.942) 0:00:05.959 *********
2026-03-31 02:47:24.718877 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-31 02:47:24.718884 | orchestrator |
2026-03-31 02:47:24.718890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.718897 | orchestrator | Tuesday 31 March 2026 02:47:22 +0000 (0:00:00.349) 0:00:06.308 *********
2026-03-31 02:47:24.718904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-31 02:47:24.718908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-31 02:47:24.718913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-31 02:47:24.718917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-31 02:47:24.718921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-31 02:47:24.718925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-31 02:47:24.718930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-31 02:47:24.718934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-31 02:47:24.718938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-31 02:47:24.718942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-31 02:47:24.718946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-31 02:47:24.718951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-31 02:47:24.718955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-31 02:47:24.718959 | orchestrator |
2026-03-31 02:47:24.718964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.718968 | orchestrator | Tuesday 31 March 2026 02:47:23 +0000 (0:00:00.469) 0:00:06.778 *********
2026-03-31 02:47:24.718972 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718976 | orchestrator |
2026-03-31 02:47:24.718980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.718984 | orchestrator | Tuesday 31 March 2026 02:47:23 +0000 (0:00:00.219) 0:00:06.997 *********
2026-03-31 02:47:24.718987 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.718991 | orchestrator |
2026-03-31 02:47:24.718995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.718998 | orchestrator | Tuesday 31 March 2026 02:47:23 +0000 (0:00:00.233) 0:00:07.231 *********
2026-03-31 02:47:24.719002 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.719006 | orchestrator |
2026-03-31 02:47:24.719009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.719013 | orchestrator | Tuesday 31 March 2026 02:47:23 +0000 (0:00:00.247) 0:00:07.478 *********
2026-03-31 02:47:24.719020 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.719024 | orchestrator |
2026-03-31 02:47:24.719028 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.719031 | orchestrator | Tuesday 31 March 2026 02:47:24 +0000 (0:00:00.255) 0:00:07.734 *********
2026-03-31 02:47:24.719035 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.719039 | orchestrator |
2026-03-31 02:47:24.719042 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.719046 | orchestrator | Tuesday 31 March 2026 02:47:24 +0000 (0:00:00.232) 0:00:07.967 *********
2026-03-31 02:47:24.719050 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.719054 | orchestrator |
2026-03-31 02:47:24.719057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:24.719061 | orchestrator | Tuesday 31 March 2026 02:47:24 +0000 (0:00:00.237) 0:00:08.204 *********
2026-03-31 02:47:24.719065 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:24.719069 | orchestrator |
2026-03-31 02:47:24.719076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004360 | orchestrator | Tuesday 31 March 2026 02:47:24 +0000 (0:00:00.231) 0:00:08.436 *********
2026-03-31 02:47:33.004459 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004472 | orchestrator |
2026-03-31 02:47:33.004480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004488 | orchestrator | Tuesday 31 March 2026 02:47:24 +0000 (0:00:00.227) 0:00:08.663 *********
2026-03-31 02:47:33.004496 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-31 02:47:33.004504 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-31 02:47:33.004512 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-31 02:47:33.004588 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-31 02:47:33.004606 | orchestrator |
2026-03-31 02:47:33.004623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004635 | orchestrator | Tuesday 31 March 2026 02:47:26 +0000 (0:00:01.122) 0:00:09.786 *********
2026-03-31 02:47:33.004648 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004661 | orchestrator |
2026-03-31 02:47:33.004675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004689 | orchestrator | Tuesday 31 March 2026 02:47:26 +0000 (0:00:00.222) 0:00:10.009 *********
2026-03-31 02:47:33.004701 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004709 | orchestrator |
2026-03-31 02:47:33.004721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004733 | orchestrator | Tuesday 31 March 2026 02:47:26 +0000 (0:00:00.255) 0:00:10.264 *********
2026-03-31 02:47:33.004749 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004766 | orchestrator |
2026-03-31 02:47:33.004777 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:33.004788 | orchestrator | Tuesday 31 March 2026 02:47:26 +0000 (0:00:00.233) 0:00:10.497 *********
2026-03-31 02:47:33.004799 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004811 | orchestrator |
2026-03-31 02:47:33.004822 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-31 02:47:33.004833 | orchestrator | Tuesday 31 March 2026 02:47:26 +0000 (0:00:00.207) 0:00:10.705 *********
2026-03-31 02:47:33.004845 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-31 02:47:33.004856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-31 02:47:33.004870 | orchestrator |
2026-03-31 02:47:33.004882 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-31 02:47:33.004895 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.203) 0:00:10.909 *********
2026-03-31 02:47:33.004907 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004920 | orchestrator |
2026-03-31 02:47:33.004929 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-31 02:47:33.004938 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.136) 0:00:11.045 *********
2026-03-31 02:47:33.004964 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.004973 | orchestrator |
2026-03-31 02:47:33.004981 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-31 02:47:33.004990 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.150) 0:00:11.196 *********
2026-03-31 02:47:33.004998 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005006 | orchestrator |
2026-03-31 02:47:33.005015 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-31 02:47:33.005024 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.151) 0:00:11.347 *********
2026-03-31 02:47:33.005032 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:47:33.005041 | orchestrator |
2026-03-31 02:47:33.005051 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-31 02:47:33.005063 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.142) 0:00:11.490 *********
2026-03-31 02:47:33.005075 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dad98f55-09f4-5a2b-a5c7-aafce2660c53'}})
2026-03-31 02:47:33.005087 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67174221-9040-517a-ae84-daf8ebd704d7'}})
2026-03-31 02:47:33.005099 | orchestrator |
2026-03-31 02:47:33.005113 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-31 02:47:33.005126 | orchestrator | Tuesday 31 March 2026 02:47:27 +0000 (0:00:00.198) 0:00:11.688 *********
2026-03-31 02:47:33.005139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dad98f55-09f4-5a2b-a5c7-aafce2660c53'}})
2026-03-31 02:47:33.005152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67174221-9040-517a-ae84-daf8ebd704d7'}})
2026-03-31 02:47:33.005161 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005169 | orchestrator |
2026-03-31 02:47:33.005178 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-31 02:47:33.005186 | orchestrator | Tuesday 31 March 2026 02:47:28 +0000 (0:00:00.381) 0:00:12.069 *********
2026-03-31 02:47:33.005195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dad98f55-09f4-5a2b-a5c7-aafce2660c53'}})
2026-03-31 02:47:33.005204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67174221-9040-517a-ae84-daf8ebd704d7'}})
2026-03-31 02:47:33.005212 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005221 | orchestrator |
2026-03-31 02:47:33.005229 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-31 02:47:33.005238 | orchestrator | Tuesday 31 March 2026 02:47:28 +0000 (0:00:00.170) 0:00:12.240 *********
2026-03-31 02:47:33.005247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dad98f55-09f4-5a2b-a5c7-aafce2660c53'}})
2026-03-31 02:47:33.005273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67174221-9040-517a-ae84-daf8ebd704d7'}})
2026-03-31 02:47:33.005281 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005288 | orchestrator |
2026-03-31 02:47:33.005296 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-31 02:47:33.005303 | orchestrator | Tuesday 31 March 2026 02:47:28 +0000 (0:00:00.155) 0:00:12.396 *********
2026-03-31 02:47:33.005311 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:47:33.005318 | orchestrator |
2026-03-31 02:47:33.005325 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-31 02:47:33.005339 | orchestrator | Tuesday 31 March 2026 02:47:28 +0000 (0:00:00.158) 0:00:12.554 *********
2026-03-31 02:47:33.005347 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:47:33.005354 | orchestrator |
2026-03-31 02:47:33.005361 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-31 02:47:33.005368 | orchestrator | Tuesday 31 March 2026 02:47:28 +0000 (0:00:00.150) 0:00:12.705 *********
2026-03-31 02:47:33.005382 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005390 | orchestrator |
2026-03-31 02:47:33.005397 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-31 02:47:33.005404 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.126) 0:00:12.832 *********
2026-03-31 02:47:33.005414 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005426 | orchestrator |
2026-03-31 02:47:33.005445 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-31 02:47:33.005457 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.146) 0:00:12.978 *********
2026-03-31 02:47:33.005468 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005480 | orchestrator |
2026-03-31 02:47:33.005491 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-31 02:47:33.005502 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.161) 0:00:13.140 *********
2026-03-31 02:47:33.005569 | orchestrator | ok: [testbed-node-3] => {
2026-03-31 02:47:33.005586 | orchestrator |     "ceph_osd_devices": {
2026-03-31 02:47:33.005599 | orchestrator |         "sdb": {
2026-03-31 02:47:33.005613 | orchestrator |             "osd_lvm_uuid": "dad98f55-09f4-5a2b-a5c7-aafce2660c53"
2026-03-31 02:47:33.005626 | orchestrator |         },
2026-03-31 02:47:33.005639 | orchestrator |         "sdc": {
2026-03-31 02:47:33.005652 | orchestrator |             "osd_lvm_uuid": "67174221-9040-517a-ae84-daf8ebd704d7"
2026-03-31 02:47:33.005666 | orchestrator |         }
2026-03-31 02:47:33.005679 | orchestrator |     }
2026-03-31 02:47:33.005693 | orchestrator | }
2026-03-31 02:47:33.005706 | orchestrator |
2026-03-31 02:47:33.005718 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-31 02:47:33.005731 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.167) 0:00:13.307 *********
2026-03-31 02:47:33.005744 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005757 | orchestrator |
2026-03-31 02:47:33.005770 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-31 02:47:33.005784 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.141) 0:00:13.449 *********
2026-03-31 02:47:33.005797 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005811 | orchestrator |
2026-03-31 02:47:33.005823 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-31 02:47:33.005836 | orchestrator | Tuesday 31 March 2026 02:47:29 +0000 (0:00:00.164) 0:00:13.614 *********
2026-03-31 02:47:33.005850 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:47:33.005862 | orchestrator |
2026-03-31 02:47:33.005876 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-31 02:47:33.005889 | orchestrator | Tuesday 31 March 2026 02:47:30 +0000 (0:00:00.153) 0:00:13.767 *********
2026-03-31 02:47:33.005902 | orchestrator | changed: [testbed-node-3] => {
2026-03-31 02:47:33.005915 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-31 02:47:33.005928 | orchestrator |         "ceph_osd_devices": {
2026-03-31 02:47:33.005942 | orchestrator |             "sdb": {
2026-03-31 02:47:33.005955 | orchestrator |                 "osd_lvm_uuid": "dad98f55-09f4-5a2b-a5c7-aafce2660c53"
2026-03-31 02:47:33.005968 | orchestrator |             },
2026-03-31 02:47:33.005982 | orchestrator |             "sdc": {
2026-03-31 02:47:33.005995 | orchestrator |                 "osd_lvm_uuid": "67174221-9040-517a-ae84-daf8ebd704d7"
2026-03-31 02:47:33.006009 | orchestrator |             }
2026-03-31 02:47:33.006086 | orchestrator |         },
2026-03-31 02:47:33.006099 | orchestrator |         "lvm_volumes": [
2026-03-31 02:47:33.006112 | orchestrator |             {
2026-03-31 02:47:33.006125 | orchestrator |                 "data": "osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53",
2026-03-31 02:47:33.006138 | orchestrator |                 "data_vg": "ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53"
2026-03-31 02:47:33.006150 | orchestrator |             },
2026-03-31 02:47:33.006162 | orchestrator |             {
2026-03-31 02:47:33.006174 | orchestrator |                 "data": "osd-block-67174221-9040-517a-ae84-daf8ebd704d7",
2026-03-31 02:47:33.006198 | orchestrator |                 "data_vg": "ceph-67174221-9040-517a-ae84-daf8ebd704d7"
2026-03-31 02:47:33.006212 | orchestrator |             }
2026-03-31 02:47:33.006224 | orchestrator |         ]
2026-03-31 02:47:33.006236 | orchestrator |     }
2026-03-31 02:47:33.006248 | orchestrator | }
2026-03-31 02:47:33.006260 | orchestrator |
2026-03-31 02:47:33.006272 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-31 02:47:33.006285 | orchestrator | Tuesday 31 March 2026 02:47:30 +0000 (0:00:00.480) 0:00:14.248 *********
2026-03-31 02:47:33.006298 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 02:47:33.006310 | orchestrator |
2026-03-31 02:47:33.006322 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-31 02:47:33.006334 | orchestrator |
2026-03-31 02:47:33.006346 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-31 02:47:33.006359 | orchestrator | Tuesday 31 March 2026 02:47:32 +0000 (0:00:01.947) 0:00:16.195 *********
2026-03-31 02:47:33.006371 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-31 02:47:33.006383 | orchestrator |
2026-03-31 02:47:33.006396 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-31 02:47:33.006408 | orchestrator | Tuesday 31 March 2026 02:47:32 +0000 (0:00:00.272) 0:00:16.468 *********
2026-03-31 02:47:33.006420 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:47:33.006432 | orchestrator |
2026-03-31 02:47:33.006455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.707207 | orchestrator | Tuesday 31 March 2026 02:47:32 +0000 (0:00:00.258) 0:00:16.727 *********
2026-03-31 02:47:43.707327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-31 02:47:43.707342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-31 02:47:43.707354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-31 02:47:43.707383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-31 02:47:43.707403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-31 02:47:43.707422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-31 02:47:43.707439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-31 02:47:43.707458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-31 02:47:43.707476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-31 02:47:43.707494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-31 02:47:43.707513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-31 02:47:43.707593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-31 02:47:43.707620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-31 02:47:43.707640 | orchestrator |
2026-03-31 02:47:43.707660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.707679 | orchestrator | Tuesday 31 March 2026 02:47:33 +0000 (0:00:00.458) 0:00:17.185 *********
2026-03-31 02:47:43.707696 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.707716 | orchestrator |
2026-03-31 02:47:43.707735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.707756 | orchestrator | Tuesday 31 March 2026 02:47:33 +0000 (0:00:00.257) 0:00:17.442 *********
2026-03-31 02:47:43.707776 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.707795 | orchestrator |
2026-03-31 02:47:43.707813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.707834 | orchestrator | Tuesday 31 March 2026 02:47:33 +0000 (0:00:00.218) 0:00:17.661 *********
2026-03-31 02:47:43.707885 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.707908 | orchestrator |
2026-03-31 02:47:43.707928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.707949 | orchestrator | Tuesday 31 March 2026 02:47:34 +0000 (0:00:00.212) 0:00:17.873 *********
2026-03-31 02:47:43.707969 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.707991 | orchestrator |
2026-03-31 02:47:43.708011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708032 | orchestrator | Tuesday 31 March 2026 02:47:34 +0000 (0:00:00.701) 0:00:18.574 *********
2026-03-31 02:47:43.708052 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.708071 | orchestrator |
2026-03-31 02:47:43.708090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708110 | orchestrator | Tuesday 31 March 2026 02:47:35 +0000 (0:00:00.248) 0:00:18.823 *********
2026-03-31 02:47:43.708131 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.708151 | orchestrator |
2026-03-31 02:47:43.708172 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708192 | orchestrator | Tuesday 31 March 2026 02:47:35 +0000 (0:00:00.254) 0:00:19.077 *********
2026-03-31 02:47:43.708211 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.708231 | orchestrator |
2026-03-31 02:47:43.708250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708270 | orchestrator | Tuesday 31 March 2026 02:47:35 +0000 (0:00:00.217) 0:00:19.295 *********
2026-03-31 02:47:43.708290 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.708310 | orchestrator |
2026-03-31 02:47:43.708329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708348 | orchestrator | Tuesday 31 March 2026 02:47:35 +0000 (0:00:00.221) 0:00:19.516 *********
2026-03-31 02:47:43.708367 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031)
2026-03-31 02:47:43.708387 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031)
2026-03-31 02:47:43.708408 | orchestrator |
2026-03-31 02:47:43.708428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708449 | orchestrator | Tuesday 31 March 2026 02:47:36 +0000 (0:00:00.516) 0:00:20.033 *********
2026-03-31 02:47:43.708468 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247)
2026-03-31 02:47:43.708487 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247)
2026-03-31 02:47:43.708508 | orchestrator |
2026-03-31 02:47:43.708556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708570 | orchestrator | Tuesday 31 March 2026 02:47:36 +0000 (0:00:00.487) 0:00:20.520 *********
2026-03-31 02:47:43.708581 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814)
2026-03-31 02:47:43.708592 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814)
2026-03-31 02:47:43.708602 | orchestrator |
2026-03-31 02:47:43.708613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708646 | orchestrator | Tuesday 31 March 2026 02:47:37 +0000 (0:00:00.508) 0:00:21.029 *********
2026-03-31 02:47:43.708658 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351)
2026-03-31 02:47:43.708669 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351)
2026-03-31 02:47:43.708680 | orchestrator |
2026-03-31 02:47:43.708690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:47:43.708711 | orchestrator | Tuesday 31 March 2026 02:47:38 +0000 (0:00:00.869) 0:00:21.899 *********
2026-03-31 02:47:43.708722 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-31 02:47:43.708747 | orchestrator |
2026-03-31 02:47:43.708758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.708768 | orchestrator | Tuesday 31 March 2026 02:47:38 +0000 (0:00:00.764) 0:00:22.663 *********
2026-03-31 02:47:43.708779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-31 02:47:43.708789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-31 02:47:43.708800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-31 02:47:43.708811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-31 02:47:43.708821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-31 02:47:43.708832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-31 02:47:43.708842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-31 02:47:43.708853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-31 02:47:43.708864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-31 02:47:43.708874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-31 02:47:43.708885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-31 02:47:43.708896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-31 02:47:43.708907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-31 02:47:43.708918 | orchestrator |
2026-03-31 02:47:43.708928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.708939 | orchestrator | Tuesday 31 March 2026 02:47:40 +0000 (0:00:01.085) 0:00:23.748 *********
2026-03-31 02:47:43.708950 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.708960 | orchestrator |
2026-03-31 02:47:43.708971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.708982 | orchestrator | Tuesday 31 March 2026 02:47:40 +0000 (0:00:00.250) 0:00:23.999 *********
2026-03-31 02:47:43.708992 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709003 | orchestrator |
2026-03-31 02:47:43.709014 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709024 | orchestrator | Tuesday 31 March 2026 02:47:40 +0000 (0:00:00.239) 0:00:24.239 *********
2026-03-31 02:47:43.709035 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709046 | orchestrator |
2026-03-31 02:47:43.709056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709067 | orchestrator | Tuesday 31 March 2026 02:47:40 +0000 (0:00:00.223) 0:00:24.463 *********
2026-03-31 02:47:43.709078 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709088 | orchestrator |
2026-03-31 02:47:43.709099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709110 | orchestrator | Tuesday 31 March 2026 02:47:41 +0000 (0:00:00.287) 0:00:24.750 *********
2026-03-31 02:47:43.709120 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709131 | orchestrator |
2026-03-31 02:47:43.709141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709152 | orchestrator | Tuesday 31 March 2026 02:47:41 +0000 (0:00:00.253) 0:00:25.003 *********
2026-03-31 02:47:43.709163 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709174 | orchestrator |
2026-03-31 02:47:43.709184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709195 | orchestrator | Tuesday 31 March 2026 02:47:41 +0000 (0:00:00.244) 0:00:25.247 *********
2026-03-31 02:47:43.709205 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709223 | orchestrator |
2026-03-31 02:47:43.709234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709244 | orchestrator | Tuesday 31 March 2026 02:47:41 +0000 (0:00:00.238) 0:00:25.486 *********
2026-03-31 02:47:43.709255 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:43.709266 | orchestrator |
2026-03-31 02:47:43.709276 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709287 | orchestrator | Tuesday 31 March 2026 02:47:41 +0000 (0:00:00.229) 0:00:25.716 *********
2026-03-31 02:47:43.709297 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-31 02:47:43.709309 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-31 02:47:43.709328 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-31 02:47:43.709346 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-31 02:47:43.709365 | orchestrator |
2026-03-31 02:47:43.709383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:43.709400 | orchestrator | Tuesday 31 March 2026 02:47:42 +0000 (0:00:00.986) 0:00:26.702 *********
2026-03-31 02:47:43.709418 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457111 | orchestrator |
2026-03-31 02:47:50.457217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:50.457234 | orchestrator | Tuesday 31 March 2026 02:47:43 +0000 (0:00:00.722) 0:00:27.425 *********
2026-03-31 02:47:50.457246 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457256 | orchestrator |
2026-03-31 02:47:50.457267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:50.457277 | orchestrator | Tuesday 31 March 2026 02:47:43 +0000 (0:00:00.247) 0:00:27.672 *********
2026-03-31 02:47:50.457302 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457312 | orchestrator |
2026-03-31 02:47:50.457322 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:47:50.457331 | orchestrator | Tuesday 31 March 2026 02:47:44 +0000 (0:00:00.236) 0:00:27.909 *********
2026-03-31 02:47:50.457341 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457351 | orchestrator |
2026-03-31 02:47:50.457361 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-31 02:47:50.457371 | orchestrator | Tuesday 31 March 2026 02:47:44 +0000 (0:00:00.258) 0:00:28.168 *********
2026-03-31 02:47:50.457380 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-31 02:47:50.457391 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-31 02:47:50.457401 | orchestrator |
2026-03-31 02:47:50.457410 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-31 02:47:50.457420 | orchestrator | Tuesday 31 March 2026 02:47:44 +0000 (0:00:00.196) 0:00:28.364 *********
2026-03-31 02:47:50.457429 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457439 | orchestrator |
2026-03-31 02:47:50.457449 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-31 02:47:50.457458 | orchestrator | Tuesday 31 March 2026 02:47:44 +0000 (0:00:00.152) 0:00:28.517 *********
2026-03-31 02:47:50.457468 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457477 | orchestrator |
2026-03-31 02:47:50.457487 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-31 02:47:50.457497 | orchestrator | Tuesday 31 March 2026 02:47:44 +0000 (0:00:00.188) 0:00:28.706 *********
2026-03-31 02:47:50.457506 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:47:50.457516 | orchestrator |
2026-03-31 02:47:50.457526 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-31 02:47:50.457586 | orchestrator | Tuesday 31 March 2026 02:47:45 +0000 (0:00:00.148) 0:00:28.854 *********
2026-03-31 02:47:50.457597 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:47:50.457608 | orchestrator |
2026-03-31 02:47:50.457617 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-31 02:47:50.457627 | orchestrator | Tuesday 31 March 2026 02:47:45 +0000 (0:00:00.160) 0:00:29.015 *********
2026-03-31 02:47:50.457658 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}})
2026-03-31 02:47:50.457670 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'da0b55d5-13d5-528b-aee2-5667f342587c'}})
2026-03-31 02:47:50.457683 | orchestrator |
2026-03-31 02:47:50.457694 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-31 02:47:50.457705 | orchestrator | Tuesday 31 March 2026 02:47:45 +0000 (0:00:00.201) 0:00:29.216 ********* 2026-03-31 02:47:50.457718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}})  2026-03-31 02:47:50.457730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'da0b55d5-13d5-528b-aee2-5667f342587c'}})  2026-03-31 02:47:50.457741 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.457752 | orchestrator | 2026-03-31 02:47:50.457763 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-31 02:47:50.457774 | orchestrator | Tuesday 31 March 2026 02:47:45 +0000 (0:00:00.164) 0:00:29.381 ********* 2026-03-31 02:47:50.457785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}})  2026-03-31 02:47:50.457795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'da0b55d5-13d5-528b-aee2-5667f342587c'}})  2026-03-31 02:47:50.457806 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.457818 | orchestrator | 2026-03-31 02:47:50.457828 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-31 02:47:50.457839 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.440) 0:00:29.821 ********* 2026-03-31 02:47:50.457850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}})  2026-03-31 02:47:50.457861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'da0b55d5-13d5-528b-aee2-5667f342587c'}})  2026-03-31 02:47:50.457872 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.457883 | 
orchestrator | 2026-03-31 02:47:50.457894 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-31 02:47:50.457905 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.223) 0:00:30.045 ********* 2026-03-31 02:47:50.457916 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:47:50.457928 | orchestrator | 2026-03-31 02:47:50.457938 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-31 02:47:50.457948 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.162) 0:00:30.207 ********* 2026-03-31 02:47:50.457957 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:47:50.457967 | orchestrator | 2026-03-31 02:47:50.457976 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-31 02:47:50.457985 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.151) 0:00:30.359 ********* 2026-03-31 02:47:50.458096 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458113 | orchestrator | 2026-03-31 02:47:50.458123 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-31 02:47:50.458133 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.145) 0:00:30.505 ********* 2026-03-31 02:47:50.458142 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458152 | orchestrator | 2026-03-31 02:47:50.458161 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-31 02:47:50.458171 | orchestrator | Tuesday 31 March 2026 02:47:46 +0000 (0:00:00.151) 0:00:30.656 ********* 2026-03-31 02:47:50.458187 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458196 | orchestrator | 2026-03-31 02:47:50.458206 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-31 02:47:50.458215 | orchestrator | Tuesday 31 March 2026 02:47:47 +0000 
(0:00:00.160) 0:00:30.817 ********* 2026-03-31 02:47:50.458233 | orchestrator | ok: [testbed-node-4] => { 2026-03-31 02:47:50.458243 | orchestrator |  "ceph_osd_devices": { 2026-03-31 02:47:50.458252 | orchestrator |  "sdb": { 2026-03-31 02:47:50.458262 | orchestrator |  "osd_lvm_uuid": "ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb" 2026-03-31 02:47:50.458272 | orchestrator |  }, 2026-03-31 02:47:50.458281 | orchestrator |  "sdc": { 2026-03-31 02:47:50.458291 | orchestrator |  "osd_lvm_uuid": "da0b55d5-13d5-528b-aee2-5667f342587c" 2026-03-31 02:47:50.458300 | orchestrator |  } 2026-03-31 02:47:50.458310 | orchestrator |  } 2026-03-31 02:47:50.458319 | orchestrator | } 2026-03-31 02:47:50.458329 | orchestrator | 2026-03-31 02:47:50.458339 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-31 02:47:50.458348 | orchestrator | Tuesday 31 March 2026 02:47:47 +0000 (0:00:00.171) 0:00:30.988 ********* 2026-03-31 02:47:50.458358 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458367 | orchestrator | 2026-03-31 02:47:50.458377 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-31 02:47:50.458387 | orchestrator | Tuesday 31 March 2026 02:47:47 +0000 (0:00:00.145) 0:00:31.133 ********* 2026-03-31 02:47:50.458396 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458406 | orchestrator | 2026-03-31 02:47:50.458415 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-31 02:47:50.458425 | orchestrator | Tuesday 31 March 2026 02:47:47 +0000 (0:00:00.168) 0:00:31.302 ********* 2026-03-31 02:47:50.458434 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:47:50.458444 | orchestrator | 2026-03-31 02:47:50.458453 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-31 02:47:50.458463 | orchestrator | Tuesday 31 March 2026 02:47:47 +0000 
(0:00:00.150) 0:00:31.452 ********* 2026-03-31 02:47:50.458472 | orchestrator | changed: [testbed-node-4] => { 2026-03-31 02:47:50.458482 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-31 02:47:50.458492 | orchestrator |  "ceph_osd_devices": { 2026-03-31 02:47:50.458501 | orchestrator |  "sdb": { 2026-03-31 02:47:50.458511 | orchestrator |  "osd_lvm_uuid": "ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb" 2026-03-31 02:47:50.458520 | orchestrator |  }, 2026-03-31 02:47:50.458557 | orchestrator |  "sdc": { 2026-03-31 02:47:50.458568 | orchestrator |  "osd_lvm_uuid": "da0b55d5-13d5-528b-aee2-5667f342587c" 2026-03-31 02:47:50.458578 | orchestrator |  } 2026-03-31 02:47:50.458587 | orchestrator |  }, 2026-03-31 02:47:50.458597 | orchestrator |  "lvm_volumes": [ 2026-03-31 02:47:50.458606 | orchestrator |  { 2026-03-31 02:47:50.458616 | orchestrator |  "data": "osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb", 2026-03-31 02:47:50.458625 | orchestrator |  "data_vg": "ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb" 2026-03-31 02:47:50.458635 | orchestrator |  }, 2026-03-31 02:47:50.458644 | orchestrator |  { 2026-03-31 02:47:50.458654 | orchestrator |  "data": "osd-block-da0b55d5-13d5-528b-aee2-5667f342587c", 2026-03-31 02:47:50.458663 | orchestrator |  "data_vg": "ceph-da0b55d5-13d5-528b-aee2-5667f342587c" 2026-03-31 02:47:50.458672 | orchestrator |  } 2026-03-31 02:47:50.458682 | orchestrator |  ] 2026-03-31 02:47:50.458692 | orchestrator |  } 2026-03-31 02:47:50.458701 | orchestrator | } 2026-03-31 02:47:50.458711 | orchestrator | 2026-03-31 02:47:50.458721 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-31 02:47:50.458730 | orchestrator | Tuesday 31 March 2026 02:47:48 +0000 (0:00:00.463) 0:00:31.916 ********* 2026-03-31 02:47:50.458740 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-31 02:47:50.458750 | orchestrator | 2026-03-31 02:47:50.458759 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-31 02:47:50.458769 | orchestrator | 2026-03-31 02:47:50.458778 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-31 02:47:50.458788 | orchestrator | Tuesday 31 March 2026 02:47:49 +0000 (0:00:01.275) 0:00:33.191 ********* 2026-03-31 02:47:50.458805 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-31 02:47:50.458814 | orchestrator | 2026-03-31 02:47:50.458824 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-31 02:47:50.458834 | orchestrator | Tuesday 31 March 2026 02:47:49 +0000 (0:00:00.304) 0:00:33.496 ********* 2026-03-31 02:47:50.458843 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:47:50.458853 | orchestrator | 2026-03-31 02:47:50.458863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:47:50.458872 | orchestrator | Tuesday 31 March 2026 02:47:50 +0000 (0:00:00.281) 0:00:33.777 ********* 2026-03-31 02:47:50.458882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-31 02:47:50.458891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-31 02:47:50.458901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-31 02:47:50.458910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-31 02:47:50.458920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-31 02:47:50.458938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-31 02:48:00.317464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-31 02:48:00.317584 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-31 02:48:00.317595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-31 02:48:00.317602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-31 02:48:00.317620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-31 02:48:00.317626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-31 02:48:00.317632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-31 02:48:00.317638 | orchestrator | 2026-03-31 02:48:00.317645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317652 | orchestrator | Tuesday 31 March 2026 02:47:50 +0000 (0:00:00.399) 0:00:34.177 ********* 2026-03-31 02:48:00.317683 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317692 | orchestrator | 2026-03-31 02:48:00.317698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317704 | orchestrator | Tuesday 31 March 2026 02:47:50 +0000 (0:00:00.246) 0:00:34.424 ********* 2026-03-31 02:48:00.317710 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317716 | orchestrator | 2026-03-31 02:48:00.317722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317728 | orchestrator | Tuesday 31 March 2026 02:47:50 +0000 (0:00:00.214) 0:00:34.638 ********* 2026-03-31 02:48:00.317734 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317740 | orchestrator | 2026-03-31 02:48:00.317746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317752 | 
orchestrator | Tuesday 31 March 2026 02:47:51 +0000 (0:00:00.231) 0:00:34.870 ********* 2026-03-31 02:48:00.317757 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317763 | orchestrator | 2026-03-31 02:48:00.317769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317775 | orchestrator | Tuesday 31 March 2026 02:47:51 +0000 (0:00:00.674) 0:00:35.544 ********* 2026-03-31 02:48:00.317781 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317787 | orchestrator | 2026-03-31 02:48:00.317793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317799 | orchestrator | Tuesday 31 March 2026 02:47:52 +0000 (0:00:00.209) 0:00:35.754 ********* 2026-03-31 02:48:00.317822 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317828 | orchestrator | 2026-03-31 02:48:00.317834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317840 | orchestrator | Tuesday 31 March 2026 02:47:52 +0000 (0:00:00.243) 0:00:35.997 ********* 2026-03-31 02:48:00.317846 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317851 | orchestrator | 2026-03-31 02:48:00.317857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317863 | orchestrator | Tuesday 31 March 2026 02:47:52 +0000 (0:00:00.230) 0:00:36.227 ********* 2026-03-31 02:48:00.317868 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.317874 | orchestrator | 2026-03-31 02:48:00.317880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317886 | orchestrator | Tuesday 31 March 2026 02:47:52 +0000 (0:00:00.209) 0:00:36.437 ********* 2026-03-31 02:48:00.317891 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126) 2026-03-31 02:48:00.317897 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126) 2026-03-31 02:48:00.317903 | orchestrator | 2026-03-31 02:48:00.317909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317914 | orchestrator | Tuesday 31 March 2026 02:47:53 +0000 (0:00:00.485) 0:00:36.923 ********* 2026-03-31 02:48:00.317920 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae) 2026-03-31 02:48:00.317926 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae) 2026-03-31 02:48:00.317932 | orchestrator | 2026-03-31 02:48:00.317938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317943 | orchestrator | Tuesday 31 March 2026 02:47:53 +0000 (0:00:00.446) 0:00:37.369 ********* 2026-03-31 02:48:00.317949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7) 2026-03-31 02:48:00.317955 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7) 2026-03-31 02:48:00.317961 | orchestrator | 2026-03-31 02:48:00.317966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:48:00.317972 | orchestrator | Tuesday 31 March 2026 02:47:54 +0000 (0:00:00.511) 0:00:37.881 ********* 2026-03-31 02:48:00.317978 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d) 2026-03-31 02:48:00.317984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d) 2026-03-31 02:48:00.317990 | orchestrator | 2026-03-31 02:48:00.317995 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-31 02:48:00.318001 | orchestrator | Tuesday 31 March 2026 02:47:54 +0000 (0:00:00.592) 0:00:38.473 ********* 2026-03-31 02:48:00.318007 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-31 02:48:00.318013 | orchestrator | 2026-03-31 02:48:00.318056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318077 | orchestrator | Tuesday 31 March 2026 02:47:55 +0000 (0:00:00.492) 0:00:38.966 ********* 2026-03-31 02:48:00.318084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-31 02:48:00.318091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-31 02:48:00.318098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-31 02:48:00.318109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-31 02:48:00.318115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-31 02:48:00.318122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-31 02:48:00.318135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-31 02:48:00.318142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-31 02:48:00.318149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-31 02:48:00.318155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-31 02:48:00.318162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-31 02:48:00.318168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-31 02:48:00.318175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-31 02:48:00.318181 | orchestrator | 2026-03-31 02:48:00.318188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318194 | orchestrator | Tuesday 31 March 2026 02:47:56 +0000 (0:00:00.768) 0:00:39.734 ********* 2026-03-31 02:48:00.318201 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318208 | orchestrator | 2026-03-31 02:48:00.318214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318221 | orchestrator | Tuesday 31 March 2026 02:47:56 +0000 (0:00:00.235) 0:00:39.970 ********* 2026-03-31 02:48:00.318228 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318235 | orchestrator | 2026-03-31 02:48:00.318242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318248 | orchestrator | Tuesday 31 March 2026 02:47:56 +0000 (0:00:00.239) 0:00:40.209 ********* 2026-03-31 02:48:00.318255 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318262 | orchestrator | 2026-03-31 02:48:00.318269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318275 | orchestrator | Tuesday 31 March 2026 02:47:56 +0000 (0:00:00.271) 0:00:40.481 ********* 2026-03-31 02:48:00.318282 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318289 | orchestrator | 2026-03-31 02:48:00.318295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318302 | orchestrator | Tuesday 31 March 2026 02:47:56 +0000 (0:00:00.227) 0:00:40.708 ********* 2026-03-31 02:48:00.318309 
| orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318316 | orchestrator | 2026-03-31 02:48:00.318323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318329 | orchestrator | Tuesday 31 March 2026 02:47:57 +0000 (0:00:00.196) 0:00:40.905 ********* 2026-03-31 02:48:00.318336 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318343 | orchestrator | 2026-03-31 02:48:00.318349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318355 | orchestrator | Tuesday 31 March 2026 02:47:57 +0000 (0:00:00.253) 0:00:41.159 ********* 2026-03-31 02:48:00.318361 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318368 | orchestrator | 2026-03-31 02:48:00.318378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318386 | orchestrator | Tuesday 31 March 2026 02:47:57 +0000 (0:00:00.212) 0:00:41.371 ********* 2026-03-31 02:48:00.318396 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318405 | orchestrator | 2026-03-31 02:48:00.318415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318424 | orchestrator | Tuesday 31 March 2026 02:47:57 +0000 (0:00:00.235) 0:00:41.607 ********* 2026-03-31 02:48:00.318434 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-31 02:48:00.318443 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-31 02:48:00.318452 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-31 02:48:00.318460 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-31 02:48:00.318469 | orchestrator | 2026-03-31 02:48:00.318485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318495 | orchestrator | Tuesday 31 March 2026 02:47:58 +0000 (0:00:00.960) 0:00:42.567 
********* 2026-03-31 02:48:00.318503 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318511 | orchestrator | 2026-03-31 02:48:00.318521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318531 | orchestrator | Tuesday 31 March 2026 02:47:59 +0000 (0:00:00.238) 0:00:42.806 ********* 2026-03-31 02:48:00.318587 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318600 | orchestrator | 2026-03-31 02:48:00.318609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318619 | orchestrator | Tuesday 31 March 2026 02:47:59 +0000 (0:00:00.237) 0:00:43.044 ********* 2026-03-31 02:48:00.318628 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318637 | orchestrator | 2026-03-31 02:48:00.318647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:48:00.318659 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.782) 0:00:43.826 ********* 2026-03-31 02:48:00.318670 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:00.318680 | orchestrator | 2026-03-31 02:48:00.318699 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-31 02:48:04.823811 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.213) 0:00:44.039 ********* 2026-03-31 02:48:04.823895 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-31 02:48:04.823910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-31 02:48:04.823923 | orchestrator | 2026-03-31 02:48:04.823935 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-31 02:48:04.823966 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.196) 0:00:44.236 ********* 2026-03-31 02:48:04.823979 | orchestrator | skipping: 
[testbed-node-5] 2026-03-31 02:48:04.823992 | orchestrator | 2026-03-31 02:48:04.824005 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-31 02:48:04.824019 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.151) 0:00:44.387 ********* 2026-03-31 02:48:04.824033 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:04.824042 | orchestrator | 2026-03-31 02:48:04.824049 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-31 02:48:04.824056 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.165) 0:00:44.553 ********* 2026-03-31 02:48:04.824063 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:04.824071 | orchestrator | 2026-03-31 02:48:04.824078 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-31 02:48:04.824086 | orchestrator | Tuesday 31 March 2026 02:48:00 +0000 (0:00:00.143) 0:00:44.696 ********* 2026-03-31 02:48:04.824093 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:48:04.824101 | orchestrator | 2026-03-31 02:48:04.824108 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-31 02:48:04.824116 | orchestrator | Tuesday 31 March 2026 02:48:01 +0000 (0:00:00.169) 0:00:44.865 ********* 2026-03-31 02:48:04.824123 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07ced279-a583-5107-8220-95f80fc10ac7'}}) 2026-03-31 02:48:04.824131 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185c377e-da3e-5428-98db-747be321d2f9'}}) 2026-03-31 02:48:04.824138 | orchestrator | 2026-03-31 02:48:04.824145 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-31 02:48:04.824153 | orchestrator | Tuesday 31 March 2026 02:48:01 +0000 (0:00:00.215) 0:00:45.081 ********* 2026-03-31 02:48:04.824163 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07ced279-a583-5107-8220-95f80fc10ac7'}})  2026-03-31 02:48:04.824176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185c377e-da3e-5428-98db-747be321d2f9'}})  2026-03-31 02:48:04.824188 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:04.824224 | orchestrator | 2026-03-31 02:48:04.824233 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-31 02:48:04.824240 | orchestrator | Tuesday 31 March 2026 02:48:01 +0000 (0:00:00.140) 0:00:45.222 ********* 2026-03-31 02:48:04.824248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07ced279-a583-5107-8220-95f80fc10ac7'}})  2026-03-31 02:48:04.824255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185c377e-da3e-5428-98db-747be321d2f9'}})  2026-03-31 02:48:04.824262 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:04.824270 | orchestrator | 2026-03-31 02:48:04.824277 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-31 02:48:04.824284 | orchestrator | Tuesday 31 March 2026 02:48:01 +0000 (0:00:00.180) 0:00:45.402 ********* 2026-03-31 02:48:04.824291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07ced279-a583-5107-8220-95f80fc10ac7'}})  2026-03-31 02:48:04.824299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185c377e-da3e-5428-98db-747be321d2f9'}})  2026-03-31 02:48:04.824306 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:48:04.824313 | orchestrator | 2026-03-31 02:48:04.824320 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-31 02:48:04.824328 | orchestrator | Tuesday 31 March 2026 02:48:01 +0000 
(0:00:00.184) 0:00:45.586 *********
2026-03-31 02:48:04.824335 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:48:04.824342 | orchestrator |
2026-03-31 02:48:04.824349 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-31 02:48:04.824357 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.150) 0:00:45.737 *********
2026-03-31 02:48:04.824365 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:48:04.824374 | orchestrator |
2026-03-31 02:48:04.824382 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-31 02:48:04.824390 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.401) 0:00:46.139 *********
2026-03-31 02:48:04.824399 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824407 | orchestrator |
2026-03-31 02:48:04.824416 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-31 02:48:04.824424 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.139) 0:00:46.278 *********
2026-03-31 02:48:04.824432 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824440 | orchestrator |
2026-03-31 02:48:04.824448 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-31 02:48:04.824457 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.163) 0:00:46.441 *********
2026-03-31 02:48:04.824465 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824473 | orchestrator |
2026-03-31 02:48:04.824482 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-31 02:48:04.824490 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.128) 0:00:46.569 *********
2026-03-31 02:48:04.824498 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:48:04.824507 | orchestrator |     "ceph_osd_devices": {
2026-03-31 02:48:04.824515 | orchestrator |         "sdb": {
2026-03-31 02:48:04.824538 | orchestrator |             "osd_lvm_uuid": "07ced279-a583-5107-8220-95f80fc10ac7"
2026-03-31 02:48:04.824577 | orchestrator |         },
2026-03-31 02:48:04.824585 | orchestrator |         "sdc": {
2026-03-31 02:48:04.824603 | orchestrator |             "osd_lvm_uuid": "185c377e-da3e-5428-98db-747be321d2f9"
2026-03-31 02:48:04.824611 | orchestrator |         }
2026-03-31 02:48:04.824620 | orchestrator |     }
2026-03-31 02:48:04.824628 | orchestrator | }
2026-03-31 02:48:04.824636 | orchestrator |
2026-03-31 02:48:04.824650 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-31 02:48:04.824659 | orchestrator | Tuesday 31 March 2026 02:48:02 +0000 (0:00:00.151) 0:00:46.721 *********
2026-03-31 02:48:04.824666 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824681 | orchestrator |
2026-03-31 02:48:04.824690 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-31 02:48:04.824698 | orchestrator | Tuesday 31 March 2026 02:48:03 +0000 (0:00:00.141) 0:00:46.863 *********
2026-03-31 02:48:04.824706 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824714 | orchestrator |
2026-03-31 02:48:04.824722 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-31 02:48:04.824730 | orchestrator | Tuesday 31 March 2026 02:48:03 +0000 (0:00:00.159) 0:00:47.022 *********
2026-03-31 02:48:04.824739 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:48:04.824747 | orchestrator |
2026-03-31 02:48:04.824754 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-31 02:48:04.824762 | orchestrator | Tuesday 31 March 2026 02:48:03 +0000 (0:00:00.161) 0:00:47.184 *********
2026-03-31 02:48:04.824769 | orchestrator | changed: [testbed-node-5] => {
2026-03-31 02:48:04.824776 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-31 02:48:04.824783 | orchestrator |         "ceph_osd_devices": {
2026-03-31 02:48:04.824791 | orchestrator |             "sdb": {
2026-03-31 02:48:04.824798 | orchestrator |                 "osd_lvm_uuid": "07ced279-a583-5107-8220-95f80fc10ac7"
2026-03-31 02:48:04.824805 | orchestrator |             },
2026-03-31 02:48:04.824813 | orchestrator |             "sdc": {
2026-03-31 02:48:04.824820 | orchestrator |                 "osd_lvm_uuid": "185c377e-da3e-5428-98db-747be321d2f9"
2026-03-31 02:48:04.824827 | orchestrator |             }
2026-03-31 02:48:04.824834 | orchestrator |         },
2026-03-31 02:48:04.824841 | orchestrator |         "lvm_volumes": [
2026-03-31 02:48:04.824849 | orchestrator |             {
2026-03-31 02:48:04.824856 | orchestrator |                 "data": "osd-block-07ced279-a583-5107-8220-95f80fc10ac7",
2026-03-31 02:48:04.824863 | orchestrator |                 "data_vg": "ceph-07ced279-a583-5107-8220-95f80fc10ac7"
2026-03-31 02:48:04.824870 | orchestrator |             },
2026-03-31 02:48:04.824877 | orchestrator |             {
2026-03-31 02:48:04.824885 | orchestrator |                 "data": "osd-block-185c377e-da3e-5428-98db-747be321d2f9",
2026-03-31 02:48:04.824892 | orchestrator |                 "data_vg": "ceph-185c377e-da3e-5428-98db-747be321d2f9"
2026-03-31 02:48:04.824899 | orchestrator |             }
2026-03-31 02:48:04.824906 | orchestrator |         ]
2026-03-31 02:48:04.824913 | orchestrator |     }
2026-03-31 02:48:04.824921 | orchestrator | }
2026-03-31 02:48:04.824928 | orchestrator |
2026-03-31 02:48:04.824935 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-31 02:48:04.824942 | orchestrator | Tuesday 31 March 2026 02:48:03 +0000 (0:00:00.233) 0:00:47.418 *********
2026-03-31 02:48:04.824949 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-31 02:48:04.824957 | orchestrator |
2026-03-31 02:48:04.824964 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:48:04.824971 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 02:48:04.824980 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 02:48:04.824987 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 02:48:04.824994 | orchestrator |
2026-03-31 02:48:04.825002 | orchestrator |
2026-03-31 02:48:04.825009 | orchestrator |
2026-03-31 02:48:04.825016 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:48:04.825023 | orchestrator | Tuesday 31 March 2026 02:48:04 +0000 (0:00:01.110) 0:00:48.528 *********
2026-03-31 02:48:04.825030 | orchestrator | ===============================================================================
2026-03-31 02:48:04.825038 | orchestrator | Write configuration file ------------------------------------------------ 4.33s
2026-03-31 02:48:04.825051 | orchestrator | Add known partitions to the list of available block devices ------------- 2.32s
2026-03-31 02:48:04.825058 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-03-31 02:48:04.825065 | orchestrator | Print configuration data ------------------------------------------------ 1.18s
2026-03-31 02:48:04.825072 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2026-03-31 02:48:04.825080 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-03-31 02:48:04.825087 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-03-31 02:48:04.825094 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2026-03-31 02:48:04.825101 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-03-31 02:48:04.825108 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s
2026-03-31 02:48:04.825115 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.79s
2026-03-31 02:48:04.825123 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s
2026-03-31 02:48:04.825130 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-03-31 02:48:04.825142 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-03-31 02:48:05.485274 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-03-31 02:48:05.485349 | orchestrator | Set OSD devices config data --------------------------------------------- 0.70s
2026-03-31 02:48:05.485356 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-03-31 02:48:05.485372 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-03-31 02:48:05.485377 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-03-31 02:48:05.485381 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.69s
2026-03-31 02:48:28.235074 | orchestrator | 2026-03-31 02:48:28 | INFO  | Task aaddf9fb-3f23-4657-b229-84e959893e56 (sync inventory) is running in background. Output coming soon.
2026-03-31 02:48:58.913905 | orchestrator | 2026-03-31 02:48:29 | INFO  | Starting group_vars file reorganization
2026-03-31 02:48:58.913997 | orchestrator | 2026-03-31 02:48:29 | INFO  | Moved 0 file(s) to their respective directories
2026-03-31 02:48:58.914007 | orchestrator | 2026-03-31 02:48:29 | INFO  | Group_vars file reorganization completed
2026-03-31 02:48:58.914058 | orchestrator | 2026-03-31 02:48:32 | INFO  | Starting variable preparation from inventory
2026-03-31 02:48:58.914071 | orchestrator | 2026-03-31 02:48:36 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-31 02:48:58.914081 | orchestrator | 2026-03-31 02:48:36 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-31 02:48:58.914092 | orchestrator | 2026-03-31 02:48:36 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-31 02:48:58.914103 | orchestrator | 2026-03-31 02:48:36 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-31 02:48:58.914111 | orchestrator | 2026-03-31 02:48:36 | INFO  | Variable preparation completed
2026-03-31 02:48:58.914117 | orchestrator | 2026-03-31 02:48:37 | INFO  | Starting inventory overwrite handling
2026-03-31 02:48:58.914123 | orchestrator | 2026-03-31 02:48:37 | INFO  | Handling group overwrites in 99-overwrite
2026-03-31 02:48:58.914129 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removing group frr:children from 60-generic
2026-03-31 02:48:58.914134 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-31 02:48:58.914140 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-31 02:48:58.914166 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-31 02:48:58.914172 | orchestrator | 2026-03-31 02:48:37 | INFO  | Handling group overwrites in 20-roles
2026-03-31 02:48:58.914178 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-31 02:48:58.914183 | orchestrator | 2026-03-31 02:48:37 | INFO  | Removed 5 group(s) in total
2026-03-31 02:48:58.914189 | orchestrator | 2026-03-31 02:48:37 | INFO  | Inventory overwrite handling completed
2026-03-31 02:48:58.914194 | orchestrator | 2026-03-31 02:48:39 | INFO  | Starting merge of inventory files
2026-03-31 02:48:58.914200 | orchestrator | 2026-03-31 02:48:39 | INFO  | Inventory files merged successfully
2026-03-31 02:48:58.914205 | orchestrator | 2026-03-31 02:48:44 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-31 02:48:58.914210 | orchestrator | 2026-03-31 02:48:57 | INFO  | Successfully wrote ClusterShell configuration
2026-03-31 02:48:58.914216 | orchestrator | [master 640eb41] 2026-03-31-02-48
2026-03-31 02:48:58.914223 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-31 02:49:01.384823 | orchestrator | 2026-03-31 02:49:01 | INFO  | Task 68a1e141-9b8b-445e-98cf-50d49d994e2f (ceph-create-lvm-devices) was prepared for execution.
2026-03-31 02:49:01.384931 | orchestrator | 2026-03-31 02:49:01 | INFO  | It takes a moment until task 68a1e141-9b8b-445e-98cf-50d49d994e2f (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-31 02:49:14.229502 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-31 02:49:14.229875 | orchestrator | 2.16.14
2026-03-31 02:49:14.229911 | orchestrator |
2026-03-31 02:49:14.229928 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-31 02:49:14.229945 | orchestrator |
2026-03-31 02:49:14.229960 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-31 02:49:14.229975 | orchestrator | Tuesday 31 March 2026 02:49:06 +0000 (0:00:00.348) 0:00:00.348 *********
2026-03-31 02:49:14.229992 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 02:49:14.230007 | orchestrator |
2026-03-31 02:49:14.230098 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-31 02:49:14.230117 | orchestrator | Tuesday 31 March 2026 02:49:06 +0000 (0:00:00.263) 0:00:00.612 *********
2026-03-31 02:49:14.230132 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:49:14.230146 | orchestrator |
2026-03-31 02:49:14.230161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230177 | orchestrator | Tuesday 31 March 2026 02:49:06 +0000 (0:00:00.245) 0:00:00.857 *********
2026-03-31 02:49:14.230193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-31 02:49:14.230209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-31 02:49:14.230245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-31 02:49:14.230261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-31 02:49:14.230276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-31 02:49:14.230291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-31 02:49:14.230308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-31 02:49:14.230323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-31 02:49:14.230339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-31 02:49:14.230354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-31 02:49:14.230401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-31 02:49:14.230419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-31 02:49:14.230436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-31 02:49:14.230453 | orchestrator |
2026-03-31 02:49:14.230469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230486 | orchestrator | Tuesday 31 March 2026 02:49:07 +0000 (0:00:00.547) 0:00:01.405 *********
2026-03-31 02:49:14.230503 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230520 | orchestrator |
2026-03-31 02:49:14.230536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230554 | orchestrator | Tuesday 31 March 2026 02:49:07 +0000 (0:00:00.233) 0:00:01.639 *********
2026-03-31 02:49:14.230570 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230587 | orchestrator |
2026-03-31 02:49:14.230604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230693 | orchestrator | Tuesday 31 March 2026 02:49:07 +0000 (0:00:00.212) 0:00:01.851 *********
2026-03-31 02:49:14.230710 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230727 | orchestrator |
2026-03-31 02:49:14.230746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230762 | orchestrator | Tuesday 31 March 2026 02:49:07 +0000 (0:00:00.218) 0:00:02.070 *********
2026-03-31 02:49:14.230781 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230798 | orchestrator |
2026-03-31 02:49:14.230815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230832 | orchestrator | Tuesday 31 March 2026 02:49:08 +0000 (0:00:00.218) 0:00:02.288 *********
2026-03-31 02:49:14.230849 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230866 | orchestrator |
2026-03-31 02:49:14.230883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230900 | orchestrator | Tuesday 31 March 2026 02:49:08 +0000 (0:00:00.226) 0:00:02.515 *********
2026-03-31 02:49:14.230916 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230932 | orchestrator |
2026-03-31 02:49:14.230947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.230962 | orchestrator | Tuesday 31 March 2026 02:49:08 +0000 (0:00:00.238) 0:00:02.753 *********
2026-03-31 02:49:14.230976 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.230996 | orchestrator |
2026-03-31 02:49:14.231013 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231029 | orchestrator | Tuesday 31 March 2026 02:49:08 +0000 (0:00:00.206) 0:00:02.960 *********
2026-03-31 02:49:14.231044 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.231061 | orchestrator |
2026-03-31 02:49:14.231078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231095 | orchestrator | Tuesday 31 March 2026 02:49:09 +0000 (0:00:00.223) 0:00:03.184 *********
2026-03-31 02:49:14.231111 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047)
2026-03-31 02:49:14.231128 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047)
2026-03-31 02:49:14.231142 | orchestrator |
2026-03-31 02:49:14.231156 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231202 | orchestrator | Tuesday 31 March 2026 02:49:09 +0000 (0:00:00.430) 0:00:03.615 *********
2026-03-31 02:49:14.231221 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9)
2026-03-31 02:49:14.231237 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9)
2026-03-31 02:49:14.231253 | orchestrator |
2026-03-31 02:49:14.231269 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231303 | orchestrator | Tuesday 31 March 2026 02:49:10 +0000 (0:00:00.703) 0:00:04.318 *********
2026-03-31 02:49:14.231321 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c)
2026-03-31 02:49:14.231336 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c)
2026-03-31 02:49:14.231352 | orchestrator |
2026-03-31 02:49:14.231368 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231384 | orchestrator | Tuesday 31 March 2026 02:49:10 +0000 (0:00:00.693) 0:00:05.012 *********
2026-03-31 02:49:14.231399 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e)
2026-03-31 02:49:14.231428 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e)
2026-03-31 02:49:14.231445 | orchestrator |
2026-03-31 02:49:14.231461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:14.231478 | orchestrator | Tuesday 31 March 2026 02:49:11 +0000 (0:00:00.965) 0:00:05.978 *********
2026-03-31 02:49:14.231494 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-31 02:49:14.231512 | orchestrator |
2026-03-31 02:49:14.231527 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.231543 | orchestrator | Tuesday 31 March 2026 02:49:12 +0000 (0:00:00.365) 0:00:06.344 *********
2026-03-31 02:49:14.231558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-31 02:49:14.231574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-31 02:49:14.231590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-31 02:49:14.231632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-31 02:49:14.231651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-31 02:49:14.231668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-31 02:49:14.231684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-31 02:49:14.231700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-31 02:49:14.231716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-31 02:49:14.231731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-31 02:49:14.231748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-31 02:49:14.231764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-31 02:49:14.231781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-31 02:49:14.231798 | orchestrator |
2026-03-31 02:49:14.231814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.231830 | orchestrator | Tuesday 31 March 2026 02:49:12 +0000 (0:00:00.425) 0:00:06.769 *********
2026-03-31 02:49:14.231840 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.231850 | orchestrator |
2026-03-31 02:49:14.231859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.231869 | orchestrator | Tuesday 31 March 2026 02:49:12 +0000 (0:00:00.193) 0:00:06.962 *********
2026-03-31 02:49:14.231878 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.231895 | orchestrator |
2026-03-31 02:49:14.231911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.231926 | orchestrator | Tuesday 31 March 2026 02:49:13 +0000 (0:00:00.206) 0:00:07.169 *********
2026-03-31 02:49:14.231942 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.231973 | orchestrator |
2026-03-31 02:49:14.231990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.232005 | orchestrator | Tuesday 31 March 2026 02:49:13 +0000 (0:00:00.206) 0:00:07.376 *********
2026-03-31 02:49:14.232022 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.232039 | orchestrator |
2026-03-31 02:49:14.232054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.232070 | orchestrator | Tuesday 31 March 2026 02:49:13 +0000 (0:00:00.209) 0:00:07.585 *********
2026-03-31 02:49:14.232087 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.232104 | orchestrator |
2026-03-31 02:49:14.232120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.232135 | orchestrator | Tuesday 31 March 2026 02:49:13 +0000 (0:00:00.275) 0:00:07.861 *********
2026-03-31 02:49:14.232152 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.232169 | orchestrator |
2026-03-31 02:49:14.232183 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:14.232200 | orchestrator | Tuesday 31 March 2026 02:49:13 +0000 (0:00:00.220) 0:00:08.081 *********
2026-03-31 02:49:14.232218 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:14.232233 | orchestrator |
2026-03-31 02:49:14.232265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:22.999744 | orchestrator | Tuesday 31 March 2026 02:49:14 +0000 (0:00:00.228) 0:00:08.310 *********
2026-03-31 02:49:22.999885 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:22.999912 | orchestrator |
2026-03-31 02:49:22.999935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:22.999956 | orchestrator | Tuesday 31 March 2026 02:49:14 +0000 (0:00:00.704) 0:00:09.015 *********
2026-03-31 02:49:22.999977 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-31 02:49:22.999998 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-31 02:49:23.000019 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-31 02:49:23.000041 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-31 02:49:23.000061 | orchestrator |
2026-03-31 02:49:23.000082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:23.000101 | orchestrator | Tuesday 31 March 2026 02:49:15 +0000 (0:00:00.725) 0:00:09.740 *********
2026-03-31 02:49:23.000123 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000145 | orchestrator |
2026-03-31 02:49:23.000166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:23.000186 | orchestrator | Tuesday 31 March 2026 02:49:15 +0000 (0:00:00.208) 0:00:09.948 *********
2026-03-31 02:49:23.000207 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000228 | orchestrator |
2026-03-31 02:49:23.000274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:23.000299 | orchestrator | Tuesday 31 March 2026 02:49:16 +0000 (0:00:00.271) 0:00:10.220 *********
2026-03-31 02:49:23.000320 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000342 | orchestrator |
2026-03-31 02:49:23.000366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:23.000389 | orchestrator | Tuesday 31 March 2026 02:49:16 +0000 (0:00:00.223) 0:00:10.444 *********
2026-03-31 02:49:23.000408 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000428 | orchestrator |
2026-03-31 02:49:23.000450 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-31 02:49:23.000473 | orchestrator | Tuesday 31 March 2026 02:49:16 +0000 (0:00:00.226) 0:00:10.670 *********
2026-03-31 02:49:23.000498 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000517 | orchestrator |
2026-03-31 02:49:23.000538 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-31 02:49:23.000559 | orchestrator | Tuesday 31 March 2026 02:49:16 +0000 (0:00:00.131) 0:00:10.801 *********
2026-03-31 02:49:23.000582 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dad98f55-09f4-5a2b-a5c7-aafce2660c53'}})
2026-03-31 02:49:23.000687 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67174221-9040-517a-ae84-daf8ebd704d7'}})
2026-03-31 02:49:23.000710 | orchestrator |
2026-03-31 02:49:23.000730 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-31 02:49:23.000750 | orchestrator | Tuesday 31 March 2026 02:49:16 +0000 (0:00:00.236) 0:00:11.038 *********
2026-03-31 02:49:23.000770 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.000791 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.000811 | orchestrator |
2026-03-31 02:49:23.000831 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-31 02:49:23.000849 | orchestrator | Tuesday 31 March 2026 02:49:19 +0000 (0:00:02.099) 0:00:13.138 *********
2026-03-31 02:49:23.000868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.000887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.000906 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.000925 | orchestrator |
2026-03-31 02:49:23.000944 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-31 02:49:23.000964 | orchestrator | Tuesday 31 March 2026 02:49:19 +0000 (0:00:00.162) 0:00:13.300 *********
2026-03-31 02:49:23.000983 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001001 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001020 | orchestrator |
2026-03-31 02:49:23.001040 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-31 02:49:23.001058 | orchestrator | Tuesday 31 March 2026 02:49:20 +0000 (0:00:01.572) 0:00:14.873 *********
2026-03-31 02:49:23.001076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001111 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001128 | orchestrator |
2026-03-31 02:49:23.001147 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-31 02:49:23.001165 | orchestrator | Tuesday 31 March 2026 02:49:20 +0000 (0:00:00.165) 0:00:15.038 *********
2026-03-31 02:49:23.001211 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001231 | orchestrator |
2026-03-31 02:49:23.001250 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-31 02:49:23.001268 | orchestrator | Tuesday 31 March 2026 02:49:21 +0000 (0:00:00.375) 0:00:15.414 *********
2026-03-31 02:49:23.001288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001325 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001342 | orchestrator |
2026-03-31 02:49:23.001360 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-31 02:49:23.001379 | orchestrator | Tuesday 31 March 2026 02:49:21 +0000 (0:00:00.182) 0:00:15.597 *********
2026-03-31 02:49:23.001414 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001433 | orchestrator |
2026-03-31 02:49:23.001451 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-31 02:49:23.001470 | orchestrator | Tuesday 31 March 2026 02:49:21 +0000 (0:00:00.153) 0:00:15.751 *********
2026-03-31 02:49:23.001499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001536 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001554 | orchestrator |
2026-03-31 02:49:23.001573 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-31 02:49:23.001590 | orchestrator | Tuesday 31 March 2026 02:49:21 +0000 (0:00:00.156) 0:00:15.908 *********
2026-03-31 02:49:23.001609 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001662 | orchestrator |
2026-03-31 02:49:23.001682 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-31 02:49:23.001701 | orchestrator | Tuesday 31 March 2026 02:49:21 +0000 (0:00:00.156) 0:00:16.064 *********
2026-03-31 02:49:23.001720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001757 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001773 | orchestrator |
2026-03-31 02:49:23.001792 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-31 02:49:23.001810 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.178) 0:00:16.243 *********
2026-03-31 02:49:23.001829 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:49:23.001848 | orchestrator |
2026-03-31 02:49:23.001866 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-31 02:49:23.001880 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.148) 0:00:16.391 *********
2026-03-31 02:49:23.001891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 02:49:23.001902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 02:49:23.001913 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:49:23.001924 | orchestrator |
2026-03-31 02:49:23.001935 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-31 02:49:23.001945 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.190) 0:00:16.581 *********
2026-03-31 02:49:23.001956 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:23.001966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:23.001977 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:23.001988 | orchestrator | 2026-03-31 02:49:23.001999 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-31 02:49:23.002009 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.163) 0:00:16.745 ********* 2026-03-31 02:49:23.002086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:23.002097 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:23.002120 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:23.002130 | orchestrator | 2026-03-31 02:49:23.002141 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-31 02:49:23.002152 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.188) 0:00:16.934 ********* 2026-03-31 02:49:23.002162 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:23.002173 | orchestrator | 2026-03-31 02:49:23.002184 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-31 02:49:23.002209 | orchestrator | Tuesday 31 March 2026 02:49:22 +0000 (0:00:00.146) 0:00:17.080 ********* 2026-03-31 02:49:29.809714 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.809817 | orchestrator | 2026-03-31 02:49:29.809837 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-31 02:49:29.809849 | orchestrator | Tuesday 31 March 2026 02:49:23 +0000 (0:00:00.148) 0:00:17.228 ********* 2026-03-31 02:49:29.809857 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.809865 | orchestrator | 2026-03-31 02:49:29.809873 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-31 02:49:29.809880 | orchestrator | Tuesday 31 March 2026 02:49:23 +0000 (0:00:00.360) 0:00:17.589 ********* 2026-03-31 02:49:29.809888 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 02:49:29.809896 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-31 02:49:29.809904 | orchestrator | } 2026-03-31 02:49:29.809911 | orchestrator | 2026-03-31 02:49:29.809919 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-31 02:49:29.809926 | orchestrator | Tuesday 31 March 2026 02:49:23 +0000 (0:00:00.172) 0:00:17.762 ********* 2026-03-31 02:49:29.809933 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 02:49:29.809940 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-31 02:49:29.809947 | orchestrator | } 2026-03-31 02:49:29.809954 | orchestrator | 2026-03-31 02:49:29.809961 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-31 02:49:29.809983 | orchestrator | Tuesday 31 March 2026 02:49:23 +0000 (0:00:00.154) 0:00:17.917 ********* 2026-03-31 02:49:29.809991 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 02:49:29.809998 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-31 02:49:29.810006 | orchestrator | } 2026-03-31 02:49:29.810080 | orchestrator | 2026-03-31 02:49:29.810091 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-31 02:49:29.810099 | orchestrator | Tuesday 31 March 2026 02:49:23 +0000 (0:00:00.150) 0:00:18.067 ********* 2026-03-31 02:49:29.810106 | orchestrator | ok: 
[testbed-node-3] 2026-03-31 02:49:29.810113 | orchestrator | 2026-03-31 02:49:29.810145 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-31 02:49:29.810153 | orchestrator | Tuesday 31 March 2026 02:49:24 +0000 (0:00:00.698) 0:00:18.765 ********* 2026-03-31 02:49:29.810161 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:29.810168 | orchestrator | 2026-03-31 02:49:29.810175 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-31 02:49:29.810183 | orchestrator | Tuesday 31 March 2026 02:49:25 +0000 (0:00:00.530) 0:00:19.296 ********* 2026-03-31 02:49:29.810190 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:29.810197 | orchestrator | 2026-03-31 02:49:29.810205 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-31 02:49:29.810213 | orchestrator | Tuesday 31 March 2026 02:49:25 +0000 (0:00:00.529) 0:00:19.825 ********* 2026-03-31 02:49:29.810222 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:29.810230 | orchestrator | 2026-03-31 02:49:29.810238 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-31 02:49:29.810247 | orchestrator | Tuesday 31 March 2026 02:49:25 +0000 (0:00:00.165) 0:00:19.990 ********* 2026-03-31 02:49:29.810255 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810263 | orchestrator | 2026-03-31 02:49:29.810271 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-31 02:49:29.810298 | orchestrator | Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.134) 0:00:20.125 ********* 2026-03-31 02:49:29.810306 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810315 | orchestrator | 2026-03-31 02:49:29.810323 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-31 02:49:29.810332 | orchestrator | 
Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.125) 0:00:20.251 ********* 2026-03-31 02:49:29.810340 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 02:49:29.810348 | orchestrator |  "vgs_report": { 2026-03-31 02:49:29.810357 | orchestrator |  "vg": [] 2026-03-31 02:49:29.810365 | orchestrator |  } 2026-03-31 02:49:29.810374 | orchestrator | } 2026-03-31 02:49:29.810382 | orchestrator | 2026-03-31 02:49:29.810391 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-31 02:49:29.810399 | orchestrator | Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.162) 0:00:20.414 ********* 2026-03-31 02:49:29.810407 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810415 | orchestrator | 2026-03-31 02:49:29.810423 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-31 02:49:29.810430 | orchestrator | Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.128) 0:00:20.542 ********* 2026-03-31 02:49:29.810437 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810444 | orchestrator | 2026-03-31 02:49:29.810452 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-31 02:49:29.810459 | orchestrator | Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.386) 0:00:20.929 ********* 2026-03-31 02:49:29.810466 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810473 | orchestrator | 2026-03-31 02:49:29.810481 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-31 02:49:29.810488 | orchestrator | Tuesday 31 March 2026 02:49:26 +0000 (0:00:00.135) 0:00:21.064 ********* 2026-03-31 02:49:29.810495 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810502 | orchestrator | 2026-03-31 02:49:29.810509 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-31 02:49:29.810517 | orchestrator | Tuesday 
31 March 2026 02:49:27 +0000 (0:00:00.132) 0:00:21.197 ********* 2026-03-31 02:49:29.810524 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810531 | orchestrator | 2026-03-31 02:49:29.810538 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-31 02:49:29.810547 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.142) 0:00:21.340 ********* 2026-03-31 02:49:29.810559 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810575 | orchestrator | 2026-03-31 02:49:29.810591 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-31 02:49:29.810603 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.151) 0:00:21.491 ********* 2026-03-31 02:49:29.810615 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810650 | orchestrator | 2026-03-31 02:49:29.810662 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-31 02:49:29.810675 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.129) 0:00:21.620 ********* 2026-03-31 02:49:29.810705 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810719 | orchestrator | 2026-03-31 02:49:29.810731 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-31 02:49:29.810744 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.149) 0:00:21.770 ********* 2026-03-31 02:49:29.810756 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810768 | orchestrator | 2026-03-31 02:49:29.810777 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-31 02:49:29.810784 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.157) 0:00:21.927 ********* 2026-03-31 02:49:29.810791 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810798 | orchestrator | 2026-03-31 02:49:29.810805 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-31 02:49:29.810812 | orchestrator | Tuesday 31 March 2026 02:49:27 +0000 (0:00:00.154) 0:00:22.082 ********* 2026-03-31 02:49:29.810828 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810835 | orchestrator | 2026-03-31 02:49:29.810842 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-31 02:49:29.810850 | orchestrator | Tuesday 31 March 2026 02:49:28 +0000 (0:00:00.140) 0:00:22.222 ********* 2026-03-31 02:49:29.810863 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810875 | orchestrator | 2026-03-31 02:49:29.810894 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-31 02:49:29.810906 | orchestrator | Tuesday 31 March 2026 02:49:28 +0000 (0:00:00.133) 0:00:22.356 ********* 2026-03-31 02:49:29.810919 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810931 | orchestrator | 2026-03-31 02:49:29.810943 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-31 02:49:29.810955 | orchestrator | Tuesday 31 March 2026 02:49:28 +0000 (0:00:00.143) 0:00:22.499 ********* 2026-03-31 02:49:29.810968 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.810980 | orchestrator | 2026-03-31 02:49:29.810994 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-31 02:49:29.811002 | orchestrator | Tuesday 31 March 2026 02:49:28 +0000 (0:00:00.395) 0:00:22.895 ********* 2026-03-31 02:49:29.811010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:29.811020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 
'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:29.811027 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.811034 | orchestrator | 2026-03-31 02:49:29.811041 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-31 02:49:29.811048 | orchestrator | Tuesday 31 March 2026 02:49:28 +0000 (0:00:00.166) 0:00:23.061 ********* 2026-03-31 02:49:29.811058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:29.811072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:29.811089 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.811102 | orchestrator | 2026-03-31 02:49:29.811113 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-31 02:49:29.811124 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.159) 0:00:23.221 ********* 2026-03-31 02:49:29.811135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:29.811148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:29.811160 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.811172 | orchestrator | 2026-03-31 02:49:29.811185 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-31 02:49:29.811193 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.159) 0:00:23.381 ********* 2026-03-31 02:49:29.811200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:29.811207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:29.811215 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.811222 | orchestrator | 2026-03-31 02:49:29.811229 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-31 02:49:29.811236 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.167) 0:00:23.548 ********* 2026-03-31 02:49:29.811251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:29.811258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:29.811265 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:29.811272 | orchestrator | 2026-03-31 02:49:29.811280 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-31 02:49:29.811287 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.171) 0:00:23.719 ********* 2026-03-31 02:49:29.811301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.505506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.505742 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.505767 | orchestrator | 2026-03-31 02:49:35.505777 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-31 02:49:35.505785 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.174) 0:00:23.894 ********* 2026-03-31 02:49:35.505792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.505800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.505807 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.505814 | orchestrator | 2026-03-31 02:49:35.505835 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-31 02:49:35.505842 | orchestrator | Tuesday 31 March 2026 02:49:29 +0000 (0:00:00.164) 0:00:24.058 ********* 2026-03-31 02:49:35.505848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.505854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.505860 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.505867 | orchestrator | 2026-03-31 02:49:35.505874 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-31 02:49:35.505880 | orchestrator | Tuesday 31 March 2026 02:49:30 +0000 (0:00:00.162) 0:00:24.221 ********* 2026-03-31 02:49:35.505886 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:35.505894 | orchestrator | 2026-03-31 02:49:35.505900 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-31 02:49:35.505907 | orchestrator | Tuesday 31 March 2026 02:49:30 +0000 
(0:00:00.566) 0:00:24.787 ********* 2026-03-31 02:49:35.505913 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:35.505920 | orchestrator | 2026-03-31 02:49:35.505926 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-31 02:49:35.505933 | orchestrator | Tuesday 31 March 2026 02:49:31 +0000 (0:00:00.542) 0:00:25.330 ********* 2026-03-31 02:49:35.505939 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:49:35.505946 | orchestrator | 2026-03-31 02:49:35.505953 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-31 02:49:35.505961 | orchestrator | Tuesday 31 March 2026 02:49:31 +0000 (0:00:00.158) 0:00:25.488 ********* 2026-03-31 02:49:35.505968 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'vg_name': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}) 2026-03-31 02:49:35.505976 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'vg_name': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}) 2026-03-31 02:49:35.506000 | orchestrator | 2026-03-31 02:49:35.506007 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-31 02:49:35.506062 | orchestrator | Tuesday 31 March 2026 02:49:31 +0000 (0:00:00.177) 0:00:25.666 ********* 2026-03-31 02:49:35.506072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.506080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.506088 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.506095 | orchestrator | 2026-03-31 02:49:35.506103 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-31 02:49:35.506111 | orchestrator | Tuesday 31 March 2026 02:49:32 +0000 (0:00:00.431) 0:00:26.097 ********* 2026-03-31 02:49:35.506120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.506128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.506136 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.506143 | orchestrator | 2026-03-31 02:49:35.506150 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-31 02:49:35.506157 | orchestrator | Tuesday 31 March 2026 02:49:32 +0000 (0:00:00.163) 0:00:26.261 ********* 2026-03-31 02:49:35.506164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 02:49:35.506171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 02:49:35.506178 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:49:35.506185 | orchestrator | 2026-03-31 02:49:35.506192 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-31 02:49:35.506199 | orchestrator | Tuesday 31 March 2026 02:49:32 +0000 (0:00:00.159) 0:00:26.421 ********* 2026-03-31 02:49:35.506227 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 02:49:35.506236 | orchestrator |  "lvm_report": { 2026-03-31 02:49:35.506244 | orchestrator |  "lv": [ 2026-03-31 02:49:35.506252 | orchestrator |  { 2026-03-31 02:49:35.506260 | orchestrator |  "lv_name": 
"osd-block-67174221-9040-517a-ae84-daf8ebd704d7", 2026-03-31 02:49:35.506268 | orchestrator |  "vg_name": "ceph-67174221-9040-517a-ae84-daf8ebd704d7" 2026-03-31 02:49:35.506275 | orchestrator |  }, 2026-03-31 02:49:35.506282 | orchestrator |  { 2026-03-31 02:49:35.506290 | orchestrator |  "lv_name": "osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53", 2026-03-31 02:49:35.506297 | orchestrator |  "vg_name": "ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53" 2026-03-31 02:49:35.506304 | orchestrator |  } 2026-03-31 02:49:35.506311 | orchestrator |  ], 2026-03-31 02:49:35.506318 | orchestrator |  "pv": [ 2026-03-31 02:49:35.506325 | orchestrator |  { 2026-03-31 02:49:35.506332 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-31 02:49:35.506339 | orchestrator |  "vg_name": "ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53" 2026-03-31 02:49:35.506345 | orchestrator |  }, 2026-03-31 02:49:35.506353 | orchestrator |  { 2026-03-31 02:49:35.506366 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-31 02:49:35.506374 | orchestrator |  "vg_name": "ceph-67174221-9040-517a-ae84-daf8ebd704d7" 2026-03-31 02:49:35.506381 | orchestrator |  } 2026-03-31 02:49:35.506388 | orchestrator |  ] 2026-03-31 02:49:35.506396 | orchestrator |  } 2026-03-31 02:49:35.506403 | orchestrator | } 2026-03-31 02:49:35.506420 | orchestrator | 2026-03-31 02:49:35.506428 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-31 02:49:35.506436 | orchestrator | 2026-03-31 02:49:35.506443 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-31 02:49:35.506451 | orchestrator | Tuesday 31 March 2026 02:49:32 +0000 (0:00:00.302) 0:00:26.723 ********* 2026-03-31 02:49:35.506457 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-31 02:49:35.506464 | orchestrator | 2026-03-31 02:49:35.506470 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-31 
02:49:35.506477 | orchestrator | Tuesday 31 March 2026 02:49:32 +0000 (0:00:00.281) 0:00:27.005 ********* 2026-03-31 02:49:35.506483 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:49:35.506490 | orchestrator | 2026-03-31 02:49:35.506497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506504 | orchestrator | Tuesday 31 March 2026 02:49:33 +0000 (0:00:00.248) 0:00:27.254 ********* 2026-03-31 02:49:35.506510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-31 02:49:35.506517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-31 02:49:35.506523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-31 02:49:35.506529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-31 02:49:35.506535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-31 02:49:35.506542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-31 02:49:35.506549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-31 02:49:35.506556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-31 02:49:35.506563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-31 02:49:35.506571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-31 02:49:35.506577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-31 02:49:35.506583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-31 02:49:35.506590 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-31 02:49:35.506597 | orchestrator | 2026-03-31 02:49:35.506603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506610 | orchestrator | Tuesday 31 March 2026 02:49:33 +0000 (0:00:00.429) 0:00:27.684 ********* 2026-03-31 02:49:35.506616 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506664 | orchestrator | 2026-03-31 02:49:35.506674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506681 | orchestrator | Tuesday 31 March 2026 02:49:33 +0000 (0:00:00.207) 0:00:27.891 ********* 2026-03-31 02:49:35.506687 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506694 | orchestrator | 2026-03-31 02:49:35.506700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506707 | orchestrator | Tuesday 31 March 2026 02:49:34 +0000 (0:00:00.740) 0:00:28.631 ********* 2026-03-31 02:49:35.506715 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506722 | orchestrator | 2026-03-31 02:49:35.506729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506735 | orchestrator | Tuesday 31 March 2026 02:49:34 +0000 (0:00:00.228) 0:00:28.860 ********* 2026-03-31 02:49:35.506742 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506748 | orchestrator | 2026-03-31 02:49:35.506755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:35.506762 | orchestrator | Tuesday 31 March 2026 02:49:34 +0000 (0:00:00.233) 0:00:29.093 ********* 2026-03-31 02:49:35.506777 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506783 | orchestrator | 2026-03-31 02:49:35.506790 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-31 02:49:35.506797 | orchestrator | Tuesday 31 March 2026 02:49:35 +0000 (0:00:00.240) 0:00:29.334 ********* 2026-03-31 02:49:35.506804 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:35.506810 | orchestrator | 2026-03-31 02:49:35.506829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:47.475907 | orchestrator | Tuesday 31 March 2026 02:49:35 +0000 (0:00:00.255) 0:00:29.589 ********* 2026-03-31 02:49:47.476009 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:47.476021 | orchestrator | 2026-03-31 02:49:47.476030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:47.476038 | orchestrator | Tuesday 31 March 2026 02:49:35 +0000 (0:00:00.234) 0:00:29.824 ********* 2026-03-31 02:49:47.476045 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:49:47.476053 | orchestrator | 2026-03-31 02:49:47.476060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:47.476067 | orchestrator | Tuesday 31 March 2026 02:49:35 +0000 (0:00:00.234) 0:00:30.059 ********* 2026-03-31 02:49:47.476075 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031) 2026-03-31 02:49:47.476083 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031) 2026-03-31 02:49:47.476090 | orchestrator | 2026-03-31 02:49:47.476111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:49:47.476119 | orchestrator | Tuesday 31 March 2026 02:49:36 +0000 (0:00:00.482) 0:00:30.542 ********* 2026-03-31 02:49:47.476126 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247) 2026-03-31 02:49:47.476134 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247)
2026-03-31 02:49:47.476141 | orchestrator |
2026-03-31 02:49:47.476148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:47.476155 | orchestrator | Tuesday 31 March 2026 02:49:36 +0000 (0:00:00.481) 0:00:31.023 *********
2026-03-31 02:49:47.476162 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814)
2026-03-31 02:49:47.476170 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814)
2026-03-31 02:49:47.476177 | orchestrator |
2026-03-31 02:49:47.476184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:47.476191 | orchestrator | Tuesday 31 March 2026 02:49:37 +0000 (0:00:00.785) 0:00:31.809 *********
2026-03-31 02:49:47.476198 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351)
2026-03-31 02:49:47.476205 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351)
2026-03-31 02:49:47.476213 | orchestrator |
2026-03-31 02:49:47.476220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-31 02:49:47.476227 | orchestrator | Tuesday 31 March 2026 02:49:38 +0000 (0:00:01.025) 0:00:32.834 *********
2026-03-31 02:49:47.476234 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-31 02:49:47.476241 | orchestrator |
2026-03-31 02:49:47.476248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476255 | orchestrator | Tuesday 31 March 2026 02:49:39 +0000 (0:00:00.382) 0:00:33.216 *********
2026-03-31 02:49:47.476262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-31 02:49:47.476270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-31 02:49:47.476277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-31 02:49:47.476304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-31 02:49:47.476312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-31 02:49:47.476320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-31 02:49:47.476327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-31 02:49:47.476334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-31 02:49:47.476341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-31 02:49:47.476348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-31 02:49:47.476355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-31 02:49:47.476362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-31 02:49:47.476369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-31 02:49:47.476376 | orchestrator |
2026-03-31 02:49:47.476383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476391 | orchestrator | Tuesday 31 March 2026 02:49:39 +0000 (0:00:00.456) 0:00:33.673 *********
2026-03-31 02:49:47.476398 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476405 | orchestrator |
2026-03-31 02:49:47.476412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476419 | orchestrator | Tuesday 31 March 2026 02:49:39 +0000 (0:00:00.237) 0:00:33.910 *********
2026-03-31 02:49:47.476426 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476433 | orchestrator |
2026-03-31 02:49:47.476440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476447 | orchestrator | Tuesday 31 March 2026 02:49:40 +0000 (0:00:00.258) 0:00:34.168 *********
2026-03-31 02:49:47.476456 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476464 | orchestrator |
2026-03-31 02:49:47.476486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476495 | orchestrator | Tuesday 31 March 2026 02:49:40 +0000 (0:00:00.222) 0:00:34.390 *********
2026-03-31 02:49:47.476503 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476511 | orchestrator |
2026-03-31 02:49:47.476519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476527 | orchestrator | Tuesday 31 March 2026 02:49:40 +0000 (0:00:00.217) 0:00:34.608 *********
2026-03-31 02:49:47.476535 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476543 | orchestrator |
2026-03-31 02:49:47.476551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476560 | orchestrator | Tuesday 31 March 2026 02:49:40 +0000 (0:00:00.263) 0:00:34.871 *********
2026-03-31 02:49:47.476568 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476576 | orchestrator |
2026-03-31 02:49:47.476584 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476592 | orchestrator | Tuesday 31 March 2026 02:49:40 +0000 (0:00:00.209) 0:00:35.081 *********
2026-03-31 02:49:47.476606 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476614 | orchestrator |
2026-03-31 02:49:47.476622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476631 | orchestrator | Tuesday 31 March 2026 02:49:41 +0000 (0:00:00.295) 0:00:35.376 *********
2026-03-31 02:49:47.476663 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476671 | orchestrator |
2026-03-31 02:49:47.476679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476687 | orchestrator | Tuesday 31 March 2026 02:49:42 +0000 (0:00:00.719) 0:00:36.095 *********
2026-03-31 02:49:47.476696 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-31 02:49:47.476710 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-31 02:49:47.476719 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-31 02:49:47.476728 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-31 02:49:47.476736 | orchestrator |
2026-03-31 02:49:47.476744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476752 | orchestrator | Tuesday 31 March 2026 02:49:42 +0000 (0:00:00.727) 0:00:36.823 *********
2026-03-31 02:49:47.476760 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476768 | orchestrator |
2026-03-31 02:49:47.476776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476784 | orchestrator | Tuesday 31 March 2026 02:49:42 +0000 (0:00:00.261) 0:00:37.084 *********
2026-03-31 02:49:47.476792 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476800 | orchestrator |
2026-03-31 02:49:47.476808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476816 | orchestrator | Tuesday 31 March 2026 02:49:43 +0000 (0:00:00.235) 0:00:37.320 *********
2026-03-31 02:49:47.476824 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476833 | orchestrator |
2026-03-31 02:49:47.476841 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-31 02:49:47.476849 | orchestrator | Tuesday 31 March 2026 02:49:43 +0000 (0:00:00.226) 0:00:37.546 *********
2026-03-31 02:49:47.476857 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476864 | orchestrator |
2026-03-31 02:49:47.476871 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-31 02:49:47.476878 | orchestrator | Tuesday 31 March 2026 02:49:43 +0000 (0:00:00.221) 0:00:37.768 *********
2026-03-31 02:49:47.476885 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.476892 | orchestrator |
2026-03-31 02:49:47.476899 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-31 02:49:47.476906 | orchestrator | Tuesday 31 March 2026 02:49:43 +0000 (0:00:00.152) 0:00:37.920 *********
2026-03-31 02:49:47.476913 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}})
2026-03-31 02:49:47.476921 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'da0b55d5-13d5-528b-aee2-5667f342587c'}})
2026-03-31 02:49:47.476928 | orchestrator |
2026-03-31 02:49:47.476935 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-31 02:49:47.476942 | orchestrator | Tuesday 31 March 2026 02:49:44 +0000 (0:00:00.215) 0:00:38.135 *********
2026-03-31 02:49:47.476951 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:47.476959 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:47.476967 | orchestrator |
2026-03-31 02:49:47.476974 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-31 02:49:47.476981 | orchestrator | Tuesday 31 March 2026 02:49:45 +0000 (0:00:01.932) 0:00:40.067 *********
2026-03-31 02:49:47.476988 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:47.476996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:47.477003 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:47.477010 | orchestrator |
2026-03-31 02:49:47.477018 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-31 02:49:47.477025 | orchestrator | Tuesday 31 March 2026 02:49:46 +0000 (0:00:00.162) 0:00:40.230 *********
2026-03-31 02:49:47.477032 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:47.477048 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.793622 | orchestrator |
2026-03-31 02:49:53.793902 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-31 02:49:53.793933 | orchestrator | Tuesday 31 March 2026 02:49:47 +0000 (0:00:01.324) 0:00:41.554 *********
2026-03-31 02:49:53.793953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.793975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.793993 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794014 | orchestrator |
2026-03-31 02:49:53.794133 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-31 02:49:53.794153 | orchestrator | Tuesday 31 March 2026 02:49:47 +0000 (0:00:00.430) 0:00:41.984 *********
2026-03-31 02:49:53.794171 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794189 | orchestrator |
2026-03-31 02:49:53.794207 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-31 02:49:53.794225 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.153) 0:00:42.138 *********
2026-03-31 02:49:53.794243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794264 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794282 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794301 | orchestrator |
2026-03-31 02:49:53.794322 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-31 02:49:53.794341 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.162) 0:00:42.301 *********
2026-03-31 02:49:53.794359 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794377 | orchestrator |
2026-03-31 02:49:53.794394 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-31 02:49:53.794413 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.132) 0:00:42.433 *********
2026-03-31 02:49:53.794430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794468 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794487 | orchestrator |
2026-03-31 02:49:53.794505 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-31 02:49:53.794523 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.165) 0:00:42.599 *********
2026-03-31 02:49:53.794540 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794559 | orchestrator |
2026-03-31 02:49:53.794579 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-31 02:49:53.794597 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.137) 0:00:42.736 *********
2026-03-31 02:49:53.794616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794679 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794698 | orchestrator |
2026-03-31 02:49:53.794716 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-31 02:49:53.794760 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.154) 0:00:42.891 *********
2026-03-31 02:49:53.794772 | orchestrator | ok: [testbed-node-4]
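The "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs", and "Create block LVs" tasks above derive one LVM volume group and one logical volume per data device from that device's `osd_lvm_uuid`. A minimal Python sketch of the naming scheme as it appears in the loop items of this log (the function and variable names are illustrative, not taken from the playbook):

```python
def block_vg_lv_names(ceph_osd_devices):
    """Derive Ceph block VG/LV names from each device's osd_lvm_uuid."""
    result = []
    for device, cfg in ceph_osd_devices.items():
        uuid = cfg["osd_lvm_uuid"]
        result.append({
            "pv": f"/dev/{device}",       # physical volume backing the VG
            "data_vg": f"ceph-{uuid}",    # volume group, as in the log items
            "data": f"osd-block-{uuid}",  # logical volume inside that VG
        })
    return result

# The two OSD devices reported for testbed-node-4 above:
devices = {
    "sdb": {"osd_lvm_uuid": "ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb"},
    "sdc": {"osd_lvm_uuid": "da0b55d5-13d5-528b-aee2-5667f342587c"},
}
for entry in block_vg_lv_names(devices):
    print(entry["pv"], entry["data_vg"], entry["data"])
```

This matches the `data`/`data_vg` pairs the "Create block VGs" and "Create block LVs" tasks loop over for sdb and sdc.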
2026-03-31 02:49:53.794784 | orchestrator |
2026-03-31 02:49:53.794795 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-31 02:49:53.794806 | orchestrator | Tuesday 31 March 2026 02:49:48 +0000 (0:00:00.151) 0:00:43.042 *********
2026-03-31 02:49:53.794816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794838 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794848 | orchestrator |
2026-03-31 02:49:53.794857 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-31 02:49:53.794867 | orchestrator | Tuesday 31 March 2026 02:49:49 +0000 (0:00:00.171) 0:00:43.213 *********
2026-03-31 02:49:53.794876 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794895 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794905 | orchestrator |
2026-03-31 02:49:53.794914 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-31 02:49:53.794945 | orchestrator | Tuesday 31 March 2026 02:49:49 +0000 (0:00:00.157) 0:00:43.371 *********
2026-03-31 02:49:53.794956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:53.794966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:53.794975 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.794985 | orchestrator |
2026-03-31 02:49:53.794994 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-31 02:49:53.795004 | orchestrator | Tuesday 31 March 2026 02:49:49 +0000 (0:00:00.162) 0:00:43.533 *********
2026-03-31 02:49:53.795021 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795030 | orchestrator |
2026-03-31 02:49:53.795040 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-31 02:49:53.795049 | orchestrator | Tuesday 31 March 2026 02:49:49 +0000 (0:00:00.393) 0:00:43.927 *********
2026-03-31 02:49:53.795059 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795068 | orchestrator |
2026-03-31 02:49:53.795078 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-31 02:49:53.795087 | orchestrator | Tuesday 31 March 2026 02:49:49 +0000 (0:00:00.155) 0:00:44.082 *********
2026-03-31 02:49:53.795096 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795106 | orchestrator |
2026-03-31 02:49:53.795115 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-31 02:49:53.795125 | orchestrator | Tuesday 31 March 2026 02:49:50 +0000 (0:00:00.158) 0:00:44.240 *********
2026-03-31 02:49:53.795134 | orchestrator | ok: [testbed-node-4] => {
2026-03-31 02:49:53.795144 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-31 02:49:53.795154 | orchestrator | }
2026-03-31 02:49:53.795163 | orchestrator |
2026-03-31 02:49:53.795173 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-31 02:49:53.795183 | orchestrator | Tuesday 31 March 2026 02:49:50 +0000 (0:00:00.166) 0:00:44.407 *********
2026-03-31 02:49:53.795192 | orchestrator | ok: [testbed-node-4] => {
2026-03-31 02:49:53.795202 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-31 02:49:53.795219 | orchestrator | }
2026-03-31 02:49:53.795229 | orchestrator |
2026-03-31 02:49:53.795238 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-31 02:49:53.795248 | orchestrator | Tuesday 31 March 2026 02:49:50 +0000 (0:00:00.146) 0:00:44.553 *********
2026-03-31 02:49:53.795257 | orchestrator | ok: [testbed-node-4] => {
2026-03-31 02:49:53.795266 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-31 02:49:53.795276 | orchestrator | }
2026-03-31 02:49:53.795286 | orchestrator |
2026-03-31 02:49:53.795295 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-31 02:49:53.795305 | orchestrator | Tuesday 31 March 2026 02:49:50 +0000 (0:00:00.161) 0:00:44.714 *********
2026-03-31 02:49:53.795314 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:53.795323 | orchestrator |
2026-03-31 02:49:53.795333 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-31 02:49:53.795342 | orchestrator | Tuesday 31 March 2026 02:49:51 +0000 (0:00:00.559) 0:00:45.274 *********
2026-03-31 02:49:53.795352 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:53.795361 | orchestrator |
2026-03-31 02:49:53.795371 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-31 02:49:53.795380 | orchestrator | Tuesday 31 March 2026 02:49:51 +0000 (0:00:00.666) 0:00:45.940 *********
2026-03-31 02:49:53.795390 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:53.795399 | orchestrator |
2026-03-31 02:49:53.795408 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-31 02:49:53.795418 | orchestrator | Tuesday 31 March 2026 02:49:52 +0000 (0:00:00.551) 0:00:46.491 *********
2026-03-31 02:49:53.795427 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:53.795437 | orchestrator |
2026-03-31 02:49:53.795446 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-31 02:49:53.795455 | orchestrator | Tuesday 31 March 2026 02:49:52 +0000 (0:00:00.144) 0:00:46.636 *********
2026-03-31 02:49:53.795465 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795474 | orchestrator |
2026-03-31 02:49:53.795484 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-31 02:49:53.795493 | orchestrator | Tuesday 31 March 2026 02:49:52 +0000 (0:00:00.133) 0:00:46.770 *********
2026-03-31 02:49:53.795503 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795512 | orchestrator |
2026-03-31 02:49:53.795522 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-31 02:49:53.795532 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.371) 0:00:47.142 *********
2026-03-31 02:49:53.795541 | orchestrator | ok: [testbed-node-4] => {
2026-03-31 02:49:53.795551 | orchestrator |     "vgs_report": {
2026-03-31 02:49:53.795561 | orchestrator |         "vg": []
2026-03-31 02:49:53.795570 | orchestrator |     }
2026-03-31 02:49:53.795580 | orchestrator | }
2026-03-31 02:49:53.795589 | orchestrator |
2026-03-31 02:49:53.795599 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-31 02:49:53.795608 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.149) 0:00:47.291 *********
2026-03-31 02:49:53.795618 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795627 | orchestrator |
2026-03-31 02:49:53.795654 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-31 02:49:53.795664 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.138) 0:00:47.430 *********
2026-03-31 02:49:53.795673 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795683 | orchestrator |
2026-03-31 02:49:53.795692 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-31 02:49:53.795702 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.141) 0:00:47.572 *********
2026-03-31 02:49:53.795711 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795720 | orchestrator |
2026-03-31 02:49:53.795730 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-31 02:49:53.795739 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.162) 0:00:47.735 *********
2026-03-31 02:49:53.795755 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:53.795765 | orchestrator |
2026-03-31 02:49:53.795780 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-31 02:49:58.968221 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.142) 0:00:47.877 *********
2026-03-31 02:49:58.968331 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968347 | orchestrator |
2026-03-31 02:49:58.968360 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-31 02:49:58.968371 | orchestrator | Tuesday 31 March 2026 02:49:53 +0000 (0:00:00.156) 0:00:48.033 *********
2026-03-31 02:49:58.968382 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968393 | orchestrator |
2026-03-31 02:49:58.968404 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-31 02:49:58.968415 | orchestrator | Tuesday 31 March 2026 02:49:54 +0000 (0:00:00.158) 0:00:48.192 *********
2026-03-31 02:49:58.968426 | orchestrator | skipping: [testbed-node-4]
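The "Gather ... VGs with total and available size in bytes" tasks and the following "Fail if size ... > available" checks amount to reading volume-group sizes from `vgs` JSON output and comparing the requested LV sizes against the free space. A rough sketch of that check (all size checks were skipped in this run, so nothing here is taken from the playbook itself; the report shape and field names are assumed to mirror `vgs --reportformat json --units b` output):

```python
import json

def vgs_with_free_space(vgs_json):
    """Index VGs by name with total/free bytes from a vgs JSON report."""
    data = json.loads(vgs_json)
    vgs = {}
    for report in data.get("report", []):
        for vg in report.get("vg", []):
            vgs[vg["vg_name"]] = {
                "vg_size": int(vg["vg_size"].rstrip("B")),  # e.g. "1000B" -> 1000
                "vg_free": int(vg["vg_free"].rstrip("B")),
            }
    return vgs

def fail_if_oversubscribed(vgs, wanted_bytes):
    """Return the names of VGs where the requested LV bytes exceed the free bytes."""
    return [name for name, size in wanted_bytes.items()
            if size > vgs.get(name, {"vg_free": 0})["vg_free"]]

# Hypothetical report: one DB VG with 400 B free, and a 500 B LV request.
sample = '{"report": [{"vg": [{"vg_name": "ceph-db-0", "vg_size": "1000B", "vg_free": "400B"}]}]}'
vgs = vgs_with_free_space(sample)
print(fail_if_oversubscribed(vgs, {"ceph-db-0": 500}))  # the 500 B request exceeds 400 B free
```

In this run the combined report was empty (`"vg": []`), so every calculation and failure check downstream was skipped.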
2026-03-31 02:49:58.968437 | orchestrator |
2026-03-31 02:49:58.968465 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-31 02:49:58.968476 | orchestrator | Tuesday 31 March 2026 02:49:54 +0000 (0:00:00.148) 0:00:48.340 *********
2026-03-31 02:49:58.968487 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968498 | orchestrator |
2026-03-31 02:49:58.968508 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-31 02:49:58.968519 | orchestrator | Tuesday 31 March 2026 02:49:54 +0000 (0:00:00.144) 0:00:48.485 *********
2026-03-31 02:49:58.968530 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968541 | orchestrator |
2026-03-31 02:49:58.968551 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-31 02:49:58.968562 | orchestrator | Tuesday 31 March 2026 02:49:54 +0000 (0:00:00.144) 0:00:48.629 *********
2026-03-31 02:49:58.968573 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968584 | orchestrator |
2026-03-31 02:49:58.968594 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-31 02:49:58.968606 | orchestrator | Tuesday 31 March 2026 02:49:54 +0000 (0:00:00.374) 0:00:49.003 *********
2026-03-31 02:49:58.968617 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968627 | orchestrator |
2026-03-31 02:49:58.968638 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-31 02:49:58.968737 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.171) 0:00:49.175 *********
2026-03-31 02:49:58.968750 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968763 | orchestrator |
2026-03-31 02:49:58.968776 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-31 02:49:58.968789 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.192) 0:00:49.368 *********
2026-03-31 02:49:58.968801 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968813 | orchestrator |
2026-03-31 02:49:58.968825 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-31 02:49:58.968837 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.159) 0:00:49.527 *********
2026-03-31 02:49:58.968849 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968862 | orchestrator |
2026-03-31 02:49:58.968874 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-31 02:49:58.968886 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.145) 0:00:49.673 *********
2026-03-31 02:49:58.968901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.968915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.968927 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.968939 | orchestrator |
2026-03-31 02:49:58.968952 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-31 02:49:58.968986 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.164) 0:00:49.837 *********
2026-03-31 02:49:58.968999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969021 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969032 | orchestrator |
2026-03-31 02:49:58.969043 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-31 02:49:58.969054 | orchestrator | Tuesday 31 March 2026 02:49:55 +0000 (0:00:00.169) 0:00:50.007 *********
2026-03-31 02:49:58.969065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969086 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969098 | orchestrator |
2026-03-31 02:49:58.969108 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-31 02:49:58.969119 | orchestrator | Tuesday 31 March 2026 02:49:56 +0000 (0:00:00.157) 0:00:50.164 *********
2026-03-31 02:49:58.969130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969152 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969163 | orchestrator |
2026-03-31 02:49:58.969193 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-31 02:49:58.969204 | orchestrator | Tuesday 31 March 2026 02:49:56 +0000 (0:00:00.180) 0:00:50.344 *********
2026-03-31 02:49:58.969215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969237 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969248 | orchestrator |
2026-03-31 02:49:58.969265 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-31 02:49:58.969276 | orchestrator | Tuesday 31 March 2026 02:49:56 +0000 (0:00:00.168) 0:00:50.512 *********
2026-03-31 02:49:58.969287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969309 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969319 | orchestrator |
2026-03-31 02:49:58.969330 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-31 02:49:58.969341 | orchestrator | Tuesday 31 March 2026 02:49:56 +0000 (0:00:00.185) 0:00:50.698 *********
2026-03-31 02:49:58.969352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969373 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969391 | orchestrator |
2026-03-31 02:49:58.969402 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-31 02:49:58.969413 | orchestrator | Tuesday 31 March 2026 02:49:57 +0000 (0:00:00.434) 0:00:51.133 *********
2026-03-31 02:49:58.969424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969445 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969456 | orchestrator |
2026-03-31 02:49:58.969467 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-31 02:49:58.969478 | orchestrator | Tuesday 31 March 2026 02:49:57 +0000 (0:00:00.178) 0:00:51.312 *********
2026-03-31 02:49:58.969489 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:58.969499 | orchestrator |
2026-03-31 02:49:58.969510 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-31 02:49:58.969521 | orchestrator | Tuesday 31 March 2026 02:49:57 +0000 (0:00:00.564) 0:00:51.876 *********
2026-03-31 02:49:58.969531 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:58.969542 | orchestrator |
2026-03-31 02:49:58.969553 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-31 02:49:58.969563 | orchestrator | Tuesday 31 March 2026 02:49:58 +0000 (0:00:00.508) 0:00:52.385 *********
2026-03-31 02:49:58.969574 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:49:58.969585 | orchestrator |
2026-03-31 02:49:58.969596 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-31 02:49:58.969606 | orchestrator | Tuesday 31 March 2026 02:49:58 +0000 (0:00:00.161) 0:00:52.546 *********
2026-03-31 02:49:58.969617 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'vg_name': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969629 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'vg_name': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969640 | orchestrator |
2026-03-31 02:49:58.969670 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-31 02:49:58.969681 | orchestrator | Tuesday 31 March 2026 02:49:58 +0000 (0:00:00.183) 0:00:52.730 *********
2026-03-31 02:49:58.969692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:49:58.969714 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:49:58.969725 | orchestrator |
2026-03-31 02:49:58.969736 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-31 02:49:58.969746 | orchestrator | Tuesday 31 March 2026 02:49:58 +0000 (0:00:00.165) 0:00:52.895 *********
2026-03-31 02:49:58.969757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 02:49:58.969775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 02:50:06.036020 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:50:06.036148 | orchestrator |
2026-03-31 02:50:06.036174 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-31 02:50:06.036210 |
orchestrator | Tuesday 31 March 2026 02:49:58 +0000 (0:00:00.157) 0:00:53.052 ********* 2026-03-31 02:50:06.036234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})  2026-03-31 02:50:06.036285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})  2026-03-31 02:50:06.036297 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:50:06.036308 | orchestrator | 2026-03-31 02:50:06.036319 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-31 02:50:06.036330 | orchestrator | Tuesday 31 March 2026 02:49:59 +0000 (0:00:00.166) 0:00:53.219 ********* 2026-03-31 02:50:06.036341 | orchestrator | ok: [testbed-node-4] => { 2026-03-31 02:50:06.036352 | orchestrator |  "lvm_report": { 2026-03-31 02:50:06.036363 | orchestrator |  "lv": [ 2026-03-31 02:50:06.036374 | orchestrator |  { 2026-03-31 02:50:06.036385 | orchestrator |  "lv_name": "osd-block-da0b55d5-13d5-528b-aee2-5667f342587c", 2026-03-31 02:50:06.036396 | orchestrator |  "vg_name": "ceph-da0b55d5-13d5-528b-aee2-5667f342587c" 2026-03-31 02:50:06.036407 | orchestrator |  }, 2026-03-31 02:50:06.036418 | orchestrator |  { 2026-03-31 02:50:06.036429 | orchestrator |  "lv_name": "osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb", 2026-03-31 02:50:06.036439 | orchestrator |  "vg_name": "ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb" 2026-03-31 02:50:06.036450 | orchestrator |  } 2026-03-31 02:50:06.036461 | orchestrator |  ], 2026-03-31 02:50:06.036471 | orchestrator |  "pv": [ 2026-03-31 02:50:06.036482 | orchestrator |  { 2026-03-31 02:50:06.036492 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-31 02:50:06.036503 | orchestrator |  "vg_name": "ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb" 2026-03-31 02:50:06.036515 | orchestrator |  }, 2026-03-31 
02:50:06.036526 | orchestrator |  { 2026-03-31 02:50:06.036536 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-31 02:50:06.036547 | orchestrator |  "vg_name": "ceph-da0b55d5-13d5-528b-aee2-5667f342587c" 2026-03-31 02:50:06.036558 | orchestrator |  } 2026-03-31 02:50:06.036568 | orchestrator |  ] 2026-03-31 02:50:06.036579 | orchestrator |  } 2026-03-31 02:50:06.036590 | orchestrator | } 2026-03-31 02:50:06.036601 | orchestrator | 2026-03-31 02:50:06.036612 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-31 02:50:06.036622 | orchestrator | 2026-03-31 02:50:06.036633 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-31 02:50:06.036644 | orchestrator | Tuesday 31 March 2026 02:49:59 +0000 (0:00:00.317) 0:00:53.536 ********* 2026-03-31 02:50:06.036685 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-31 02:50:06.036697 | orchestrator | 2026-03-31 02:50:06.036708 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-31 02:50:06.036719 | orchestrator | Tuesday 31 March 2026 02:50:00 +0000 (0:00:00.773) 0:00:54.310 ********* 2026-03-31 02:50:06.036730 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:50:06.036741 | orchestrator | 2026-03-31 02:50:06.036751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.036762 | orchestrator | Tuesday 31 March 2026 02:50:00 +0000 (0:00:00.251) 0:00:54.562 ********* 2026-03-31 02:50:06.036773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-31 02:50:06.036784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-31 02:50:06.036794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-31 02:50:06.036805 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-31 02:50:06.036816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-31 02:50:06.036826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-31 02:50:06.036837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-31 02:50:06.036855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-31 02:50:06.036866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-31 02:50:06.036876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-31 02:50:06.036887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-31 02:50:06.036898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-31 02:50:06.036908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-31 02:50:06.036919 | orchestrator | 2026-03-31 02:50:06.036930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.036940 | orchestrator | Tuesday 31 March 2026 02:50:00 +0000 (0:00:00.441) 0:00:55.004 ********* 2026-03-31 02:50:06.036951 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.036962 | orchestrator | 2026-03-31 02:50:06.036972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.036983 | orchestrator | Tuesday 31 March 2026 02:50:01 +0000 (0:00:00.219) 0:00:55.224 ********* 2026-03-31 02:50:06.036994 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037005 | orchestrator | 2026-03-31 
02:50:06.037016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037044 | orchestrator | Tuesday 31 March 2026 02:50:01 +0000 (0:00:00.216) 0:00:55.440 ********* 2026-03-31 02:50:06.037056 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037067 | orchestrator | 2026-03-31 02:50:06.037078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037088 | orchestrator | Tuesday 31 March 2026 02:50:01 +0000 (0:00:00.213) 0:00:55.654 ********* 2026-03-31 02:50:06.037099 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037110 | orchestrator | 2026-03-31 02:50:06.037121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037132 | orchestrator | Tuesday 31 March 2026 02:50:01 +0000 (0:00:00.229) 0:00:55.884 ********* 2026-03-31 02:50:06.037143 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037154 | orchestrator | 2026-03-31 02:50:06.037164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037175 | orchestrator | Tuesday 31 March 2026 02:50:02 +0000 (0:00:00.212) 0:00:56.096 ********* 2026-03-31 02:50:06.037194 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037212 | orchestrator | 2026-03-31 02:50:06.037232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037245 | orchestrator | Tuesday 31 March 2026 02:50:02 +0000 (0:00:00.216) 0:00:56.312 ********* 2026-03-31 02:50:06.037256 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037266 | orchestrator | 2026-03-31 02:50:06.037277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037288 | orchestrator | Tuesday 31 March 2026 02:50:02 +0000 (0:00:00.230) 
0:00:56.542 ********* 2026-03-31 02:50:06.037298 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:06.037309 | orchestrator | 2026-03-31 02:50:06.037320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037331 | orchestrator | Tuesday 31 March 2026 02:50:03 +0000 (0:00:00.718) 0:00:57.261 ********* 2026-03-31 02:50:06.037341 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126) 2026-03-31 02:50:06.037354 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126) 2026-03-31 02:50:06.037365 | orchestrator | 2026-03-31 02:50:06.037375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037386 | orchestrator | Tuesday 31 March 2026 02:50:03 +0000 (0:00:00.483) 0:00:57.745 ********* 2026-03-31 02:50:06.037479 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae) 2026-03-31 02:50:06.037506 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae) 2026-03-31 02:50:06.037517 | orchestrator | 2026-03-31 02:50:06.037529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037540 | orchestrator | Tuesday 31 March 2026 02:50:04 +0000 (0:00:00.464) 0:00:58.210 ********* 2026-03-31 02:50:06.037551 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7) 2026-03-31 02:50:06.037562 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7) 2026-03-31 02:50:06.037573 | orchestrator | 2026-03-31 02:50:06.037584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037595 | orchestrator | Tuesday 31 
March 2026 02:50:04 +0000 (0:00:00.516) 0:00:58.726 ********* 2026-03-31 02:50:06.037606 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d) 2026-03-31 02:50:06.037617 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d) 2026-03-31 02:50:06.037629 | orchestrator | 2026-03-31 02:50:06.037640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-31 02:50:06.037677 | orchestrator | Tuesday 31 March 2026 02:50:05 +0000 (0:00:00.494) 0:00:59.220 ********* 2026-03-31 02:50:06.037690 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-31 02:50:06.037701 | orchestrator | 2026-03-31 02:50:06.037711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:06.037722 | orchestrator | Tuesday 31 March 2026 02:50:05 +0000 (0:00:00.371) 0:00:59.591 ********* 2026-03-31 02:50:06.037733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-31 02:50:06.037744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-31 02:50:06.037755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-31 02:50:06.037766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-31 02:50:06.037777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-31 02:50:06.037787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-31 02:50:06.037798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-31 02:50:06.037809 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-31 02:50:06.037820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-31 02:50:06.037831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-31 02:50:06.037842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-31 02:50:06.037863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-31 02:50:15.473999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-31 02:50:15.474172 | orchestrator | 2026-03-31 02:50:15.474189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474201 | orchestrator | Tuesday 31 March 2026 02:50:06 +0000 (0:00:00.519) 0:01:00.111 ********* 2026-03-31 02:50:15.474213 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474224 | orchestrator | 2026-03-31 02:50:15.474236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474261 | orchestrator | Tuesday 31 March 2026 02:50:06 +0000 (0:00:00.220) 0:01:00.332 ********* 2026-03-31 02:50:15.474272 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474304 | orchestrator | 2026-03-31 02:50:15.474315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474326 | orchestrator | Tuesday 31 March 2026 02:50:06 +0000 (0:00:00.222) 0:01:00.555 ********* 2026-03-31 02:50:15.474337 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474348 | orchestrator | 2026-03-31 02:50:15.474359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474370 | 
orchestrator | Tuesday 31 March 2026 02:50:06 +0000 (0:00:00.241) 0:01:00.796 ********* 2026-03-31 02:50:15.474380 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474391 | orchestrator | 2026-03-31 02:50:15.474402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474413 | orchestrator | Tuesday 31 March 2026 02:50:06 +0000 (0:00:00.224) 0:01:01.021 ********* 2026-03-31 02:50:15.474424 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474434 | orchestrator | 2026-03-31 02:50:15.474445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474456 | orchestrator | Tuesday 31 March 2026 02:50:07 +0000 (0:00:00.752) 0:01:01.774 ********* 2026-03-31 02:50:15.474467 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474477 | orchestrator | 2026-03-31 02:50:15.474488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474499 | orchestrator | Tuesday 31 March 2026 02:50:07 +0000 (0:00:00.222) 0:01:01.996 ********* 2026-03-31 02:50:15.474510 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474521 | orchestrator | 2026-03-31 02:50:15.474533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474546 | orchestrator | Tuesday 31 March 2026 02:50:08 +0000 (0:00:00.225) 0:01:02.222 ********* 2026-03-31 02:50:15.474559 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474572 | orchestrator | 2026-03-31 02:50:15.474584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474597 | orchestrator | Tuesday 31 March 2026 02:50:08 +0000 (0:00:00.217) 0:01:02.439 ********* 2026-03-31 02:50:15.474609 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-31 02:50:15.474622 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-31 02:50:15.474634 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-31 02:50:15.474647 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-31 02:50:15.474691 | orchestrator | 2026-03-31 02:50:15.474709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474721 | orchestrator | Tuesday 31 March 2026 02:50:09 +0000 (0:00:00.693) 0:01:03.132 ********* 2026-03-31 02:50:15.474734 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474746 | orchestrator | 2026-03-31 02:50:15.474758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474770 | orchestrator | Tuesday 31 March 2026 02:50:09 +0000 (0:00:00.241) 0:01:03.374 ********* 2026-03-31 02:50:15.474782 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474793 | orchestrator | 2026-03-31 02:50:15.474806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474818 | orchestrator | Tuesday 31 March 2026 02:50:09 +0000 (0:00:00.239) 0:01:03.613 ********* 2026-03-31 02:50:15.474831 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474843 | orchestrator | 2026-03-31 02:50:15.474855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-31 02:50:15.474868 | orchestrator | Tuesday 31 March 2026 02:50:09 +0000 (0:00:00.231) 0:01:03.845 ********* 2026-03-31 02:50:15.474880 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.474892 | orchestrator | 2026-03-31 02:50:15.474903 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-31 02:50:15.474913 | orchestrator | Tuesday 31 March 2026 02:50:09 +0000 (0:00:00.221) 0:01:04.066 ********* 2026-03-31 02:50:15.474924 | orchestrator | skipping: [testbed-node-5] 2026-03-31 
02:50:15.474935 | orchestrator | 2026-03-31 02:50:15.474954 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-31 02:50:15.474965 | orchestrator | Tuesday 31 March 2026 02:50:10 +0000 (0:00:00.147) 0:01:04.214 ********* 2026-03-31 02:50:15.474977 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '07ced279-a583-5107-8220-95f80fc10ac7'}}) 2026-03-31 02:50:15.474988 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185c377e-da3e-5428-98db-747be321d2f9'}}) 2026-03-31 02:50:15.474999 | orchestrator | 2026-03-31 02:50:15.475010 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-31 02:50:15.475020 | orchestrator | Tuesday 31 March 2026 02:50:10 +0000 (0:00:00.208) 0:01:04.422 ********* 2026-03-31 02:50:15.475032 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}) 2026-03-31 02:50:15.475045 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}) 2026-03-31 02:50:15.475055 | orchestrator | 2026-03-31 02:50:15.475066 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-31 02:50:15.475096 | orchestrator | Tuesday 31 March 2026 02:50:12 +0000 (0:00:01.878) 0:01:06.301 ********* 2026-03-31 02:50:15.475108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:15.475120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:15.475131 | orchestrator | skipping: 
[testbed-node-5] 2026-03-31 02:50:15.475142 | orchestrator | 2026-03-31 02:50:15.475158 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-31 02:50:15.475169 | orchestrator | Tuesday 31 March 2026 02:50:12 +0000 (0:00:00.425) 0:01:06.727 ********* 2026-03-31 02:50:15.475180 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}) 2026-03-31 02:50:15.475191 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}) 2026-03-31 02:50:15.475202 | orchestrator | 2026-03-31 02:50:15.475212 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-31 02:50:15.475223 | orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:01.370) 0:01:08.098 ********* 2026-03-31 02:50:15.475234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:15.475245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:15.475256 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475266 | orchestrator | 2026-03-31 02:50:15.475277 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-31 02:50:15.475287 | orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:00.175) 0:01:08.273 ********* 2026-03-31 02:50:15.475298 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475309 | orchestrator | 2026-03-31 02:50:15.475320 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-31 02:50:15.475330 | 
orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:00.177) 0:01:08.451 ********* 2026-03-31 02:50:15.475341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:15.475352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:15.475369 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475380 | orchestrator | 2026-03-31 02:50:15.475390 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-31 02:50:15.475401 | orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:00.171) 0:01:08.623 ********* 2026-03-31 02:50:15.475412 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475423 | orchestrator | 2026-03-31 02:50:15.475433 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-31 02:50:15.475444 | orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:00.155) 0:01:08.778 ********* 2026-03-31 02:50:15.475455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:15.475466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:15.475476 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475487 | orchestrator | 2026-03-31 02:50:15.475498 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-31 02:50:15.475509 | orchestrator | Tuesday 31 March 2026 02:50:14 +0000 (0:00:00.171) 0:01:08.949 ********* 2026-03-31 02:50:15.475519 | orchestrator | 
skipping: [testbed-node-5] 2026-03-31 02:50:15.475530 | orchestrator | 2026-03-31 02:50:15.475541 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-31 02:50:15.475551 | orchestrator | Tuesday 31 March 2026 02:50:15 +0000 (0:00:00.150) 0:01:09.100 ********* 2026-03-31 02:50:15.475562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:15.475573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:15.475584 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:15.475595 | orchestrator | 2026-03-31 02:50:15.475606 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-31 02:50:15.475616 | orchestrator | Tuesday 31 March 2026 02:50:15 +0000 (0:00:00.151) 0:01:09.251 ********* 2026-03-31 02:50:15.475627 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:50:15.475638 | orchestrator | 2026-03-31 02:50:15.475648 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-31 02:50:15.475699 | orchestrator | Tuesday 31 March 2026 02:50:15 +0000 (0:00:00.147) 0:01:09.399 ********* 2026-03-31 02:50:15.475718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:22.214134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:22.214216 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:22.214226 | orchestrator | 2026-03-31 02:50:22.214233 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-31 02:50:22.214241 | orchestrator | Tuesday 31 March 2026 02:50:15 +0000 (0:00:00.158) 0:01:09.558 ********* 2026-03-31 02:50:22.214260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:22.214267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:22.214273 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:22.214279 | orchestrator | 2026-03-31 02:50:22.214285 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-31 02:50:22.214291 | orchestrator | Tuesday 31 March 2026 02:50:15 +0000 (0:00:00.157) 0:01:09.715 ********* 2026-03-31 02:50:22.214312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 02:50:22.214318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 02:50:22.214324 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:22.214330 | orchestrator | 2026-03-31 02:50:22.214336 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-31 02:50:22.214342 | orchestrator | Tuesday 31 March 2026 02:50:16 +0000 (0:00:00.399) 0:01:10.114 ********* 2026-03-31 02:50:22.214347 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:50:22.214353 | orchestrator | 2026-03-31 02:50:22.214359 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-31 02:50:22.214365 | orchestrator | Tuesday 31 March 2026 02:50:16 +0000 
(0:00:00.134) 0:01:10.249 *********
2026-03-31 02:50:22.214370 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214377 | orchestrator |
2026-03-31 02:50:22.214383 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-31 02:50:22.214388 | orchestrator | Tuesday 31 March 2026  02:50:16 +0000 (0:00:00.162)       0:01:10.412 *********
2026-03-31 02:50:22.214394 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214400 | orchestrator |
2026-03-31 02:50:22.214406 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-31 02:50:22.214411 | orchestrator | Tuesday 31 March 2026  02:50:16 +0000 (0:00:00.148)       0:01:10.560 *********
2026-03-31 02:50:22.214417 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:50:22.214424 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-31 02:50:22.214430 | orchestrator | }
2026-03-31 02:50:22.214436 | orchestrator |
2026-03-31 02:50:22.214441 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-31 02:50:22.214447 | orchestrator | Tuesday 31 March 2026  02:50:16 +0000 (0:00:00.169)       0:01:10.729 *********
2026-03-31 02:50:22.214453 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:50:22.214459 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-31 02:50:22.214464 | orchestrator | }
2026-03-31 02:50:22.214470 | orchestrator |
2026-03-31 02:50:22.214476 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-31 02:50:22.214481 | orchestrator | Tuesday 31 March 2026  02:50:16 +0000 (0:00:00.149)       0:01:10.879 *********
2026-03-31 02:50:22.214487 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:50:22.214493 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-31 02:50:22.214499 | orchestrator | }
2026-03-31 02:50:22.214505 | orchestrator |
2026-03-31 02:50:22.214510 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-31 02:50:22.214516 | orchestrator | Tuesday 31 March 2026  02:50:16 +0000 (0:00:00.152)       0:01:11.032 *********
2026-03-31 02:50:22.214522 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:22.214527 | orchestrator |
2026-03-31 02:50:22.214533 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-31 02:50:22.214539 | orchestrator | Tuesday 31 March 2026  02:50:17 +0000 (0:00:00.541)       0:01:11.574 *********
2026-03-31 02:50:22.214545 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:22.214550 | orchestrator |
2026-03-31 02:50:22.214556 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-31 02:50:22.214562 | orchestrator | Tuesday 31 March 2026  02:50:18 +0000 (0:00:00.572)       0:01:12.147 *********
2026-03-31 02:50:22.214567 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:22.214573 | orchestrator |
2026-03-31 02:50:22.214578 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-31 02:50:22.214584 | orchestrator | Tuesday 31 March 2026  02:50:18 +0000 (0:00:00.152)       0:01:12.709 *********
2026-03-31 02:50:22.214590 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:22.214596 | orchestrator |
2026-03-31 02:50:22.214601 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-31 02:50:22.214612 | orchestrator | Tuesday 31 March 2026  02:50:18 +0000 (0:00:00.152)       0:01:12.861 *********
2026-03-31 02:50:22.214618 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214623 | orchestrator |
2026-03-31 02:50:22.214629 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-31 02:50:22.214635 | orchestrator | Tuesday 31 March 2026  02:50:18 +0000 (0:00:00.118)       0:01:12.979 *********
2026-03-31 02:50:22.214641 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214647 | orchestrator |
2026-03-31 02:50:22.214653 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-31 02:50:22.214702 | orchestrator | Tuesday 31 March 2026  02:50:19 +0000 (0:00:00.364)       0:01:13.344 *********
2026-03-31 02:50:22.214710 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:50:22.214717 | orchestrator |     "vgs_report": {
2026-03-31 02:50:22.214724 | orchestrator |         "vg": []
2026-03-31 02:50:22.214744 | orchestrator |     }
2026-03-31 02:50:22.214752 | orchestrator | }
2026-03-31 02:50:22.214759 | orchestrator |
2026-03-31 02:50:22.214765 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-31 02:50:22.214772 | orchestrator | Tuesday 31 March 2026  02:50:19 +0000 (0:00:00.159)       0:01:13.504 *********
2026-03-31 02:50:22.214779 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214785 | orchestrator |
2026-03-31 02:50:22.214792 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-31 02:50:22.214799 | orchestrator | Tuesday 31 March 2026  02:50:19 +0000 (0:00:00.159)       0:01:13.662 *********
2026-03-31 02:50:22.214810 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214816 | orchestrator |
2026-03-31 02:50:22.214823 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-31 02:50:22.214830 | orchestrator | Tuesday 31 March 2026  02:50:19 +0000 (0:00:00.145)       0:01:13.822 *********
2026-03-31 02:50:22.214836 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214842 | orchestrator |
2026-03-31 02:50:22.214849 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-31 02:50:22.214856 | orchestrator | Tuesday 31 March 2026  02:50:19 +0000 (0:00:00.145)       0:01:13.968 *********
2026-03-31 02:50:22.214862 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214869 | orchestrator |
2026-03-31 02:50:22.214875 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-31 02:50:22.214882 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.144)       0:01:14.112 *********
2026-03-31 02:50:22.214888 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214895 | orchestrator |
2026-03-31 02:50:22.214901 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-31 02:50:22.214908 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.141)       0:01:14.254 *********
2026-03-31 02:50:22.214914 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214921 | orchestrator |
2026-03-31 02:50:22.214928 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-31 02:50:22.214934 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.136)       0:01:14.391 *********
2026-03-31 02:50:22.214941 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214948 | orchestrator |
2026-03-31 02:50:22.214955 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-31 02:50:22.214961 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.145)       0:01:14.536 *********
2026-03-31 02:50:22.214968 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.214975 | orchestrator |
2026-03-31 02:50:22.214981 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-31 02:50:22.214988 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.147)       0:01:14.683 *********
2026-03-31 02:50:22.214994 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215000 | orchestrator |
2026-03-31 02:50:22.215007 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-31 02:50:22.215018 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.144)       0:01:14.828 *********
2026-03-31 02:50:22.215035 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215046 | orchestrator |
2026-03-31 02:50:22.215056 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-31 02:50:22.215066 | orchestrator | Tuesday 31 March 2026  02:50:20 +0000 (0:00:00.146)       0:01:14.974 *********
2026-03-31 02:50:22.215076 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215087 | orchestrator |
2026-03-31 02:50:22.215097 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-31 02:50:22.215108 | orchestrator | Tuesday 31 March 2026  02:50:21 +0000 (0:00:00.364)       0:01:15.338 *********
2026-03-31 02:50:22.215118 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215129 | orchestrator |
2026-03-31 02:50:22.215139 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-31 02:50:22.215148 | orchestrator | Tuesday 31 March 2026  02:50:21 +0000 (0:00:00.148)       0:01:15.487 *********
2026-03-31 02:50:22.215158 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215168 | orchestrator |
2026-03-31 02:50:22.215178 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-31 02:50:22.215186 | orchestrator | Tuesday 31 March 2026  02:50:21 +0000 (0:00:00.140)       0:01:15.627 *********
2026-03-31 02:50:22.215195 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215206 | orchestrator |
2026-03-31 02:50:22.215215 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-31 02:50:22.215224 | orchestrator | Tuesday 31 March 2026  02:50:21 +0000 (0:00:00.149)       0:01:15.777 *********
2026-03-31 02:50:22.215233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:22.215242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:22.215252 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215261 | orchestrator |
2026-03-31 02:50:22.215271 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-31 02:50:22.215281 | orchestrator | Tuesday 31 March 2026  02:50:21 +0000 (0:00:00.189)       0:01:15.967 *********
2026-03-31 02:50:22.215291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:22.215301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:22.215311 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:22.215321 | orchestrator |
2026-03-31 02:50:22.215332 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-31 02:50:22.215343 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.182)       0:01:16.149 *********
2026-03-31 02:50:22.215361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494642 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.494655 | orchestrator |
2026-03-31 02:50:25.494712 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-31 02:50:25.494725 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.151)       0:01:16.300 *********
2026-03-31 02:50:25.494735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494789 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.494798 | orchestrator |
2026-03-31 02:50:25.494807 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-31 02:50:25.494816 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.168)       0:01:16.468 *********
2026-03-31 02:50:25.494825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494843 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.494851 | orchestrator |
2026-03-31 02:50:25.494860 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-31 02:50:25.494868 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.162)       0:01:16.631 *********
2026-03-31 02:50:25.494877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494894 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.494903 | orchestrator |
2026-03-31 02:50:25.494911 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-31 02:50:25.494920 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.175)       0:01:16.806 *********
2026-03-31 02:50:25.494929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494937 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494946 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.494954 | orchestrator |
2026-03-31 02:50:25.494963 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-31 02:50:25.494972 | orchestrator | Tuesday 31 March 2026  02:50:22 +0000 (0:00:00.163)       0:01:16.969 *********
2026-03-31 02:50:25.494980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.494989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.494997 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.495006 | orchestrator |
2026-03-31 02:50:25.495015 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-31 02:50:25.495023 | orchestrator | Tuesday 31 March 2026  02:50:23 +0000 (0:00:00.166)       0:01:17.136 *********
2026-03-31 02:50:25.495032 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:25.495041 | orchestrator |
2026-03-31 02:50:25.495050 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-31 02:50:25.495059 | orchestrator | Tuesday 31 March 2026  02:50:23 +0000 (0:00:00.826)       0:01:17.962 *********
2026-03-31 02:50:25.495067 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:25.495076 | orchestrator |
2026-03-31 02:50:25.495086 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-31 02:50:25.495096 | orchestrator | Tuesday 31 March 2026  02:50:24 +0000 (0:00:00.533)       0:01:18.495 *********
2026-03-31 02:50:25.495106 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:25.495116 | orchestrator |
2026-03-31 02:50:25.495126 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-31 02:50:25.495136 | orchestrator | Tuesday 31 March 2026  02:50:24 +0000 (0:00:00.179)       0:01:18.675 *********
2026-03-31 02:50:25.495152 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'vg_name': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.495164 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'vg_name': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.495174 | orchestrator |
2026-03-31 02:50:25.495184 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-31 02:50:25.495194 | orchestrator | Tuesday 31 March 2026  02:50:24 +0000 (0:00:00.185)       0:01:18.860 *********
2026-03-31 02:50:25.495222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.495237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.495248 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.495258 | orchestrator |
2026-03-31 02:50:25.495267 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-31 02:50:25.495277 | orchestrator | Tuesday 31 March 2026  02:50:24 +0000 (0:00:00.186)       0:01:19.047 *********
2026-03-31 02:50:25.495287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.495297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.495307 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.495317 | orchestrator |
2026-03-31 02:50:25.495326 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-31 02:50:25.495336 | orchestrator | Tuesday 31 March 2026  02:50:25 +0000 (0:00:00.171)       0:01:19.218 *********
2026-03-31 02:50:25.495346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 02:50:25.495356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 02:50:25.495366 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:25.495376 | orchestrator |
2026-03-31 02:50:25.495386 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-31 02:50:25.495396 | orchestrator | Tuesday 31 March 2026  02:50:25 +0000 (0:00:00.177)       0:01:19.395 *********
2026-03-31 02:50:25.495404 | orchestrator | ok: [testbed-node-5] => {
2026-03-31 02:50:25.495413 | orchestrator |     "lvm_report": {
2026-03-31 02:50:25.495422 | orchestrator |         "lv": [
2026-03-31 02:50:25.495430 | orchestrator |             {
2026-03-31 02:50:25.495439 | orchestrator |                 "lv_name": "osd-block-07ced279-a583-5107-8220-95f80fc10ac7",
2026-03-31 02:50:25.495448 | orchestrator |                 "vg_name": "ceph-07ced279-a583-5107-8220-95f80fc10ac7"
2026-03-31 02:50:25.495457 | orchestrator |             },
2026-03-31 02:50:25.495465 | orchestrator |             {
2026-03-31 02:50:25.495474 | orchestrator |                 "lv_name": "osd-block-185c377e-da3e-5428-98db-747be321d2f9",
2026-03-31 02:50:25.495482 | orchestrator |                 "vg_name": "ceph-185c377e-da3e-5428-98db-747be321d2f9"
2026-03-31 02:50:25.495491 | orchestrator |             }
2026-03-31 02:50:25.495499 | orchestrator |         ],
2026-03-31 02:50:25.495508 | orchestrator |         "pv": [
2026-03-31 02:50:25.495516 | orchestrator |             {
2026-03-31 02:50:25.495525 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-31 02:50:25.495533 | orchestrator |                 "vg_name": "ceph-07ced279-a583-5107-8220-95f80fc10ac7"
2026-03-31 02:50:25.495542 | orchestrator |             },
2026-03-31 02:50:25.495550 | orchestrator |             {
2026-03-31 02:50:25.495559 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-31 02:50:25.495577 | orchestrator |                 "vg_name": "ceph-185c377e-da3e-5428-98db-747be321d2f9"
2026-03-31 02:50:25.495586 | orchestrator |             }
2026-03-31 02:50:25.495594 | orchestrator |         ]
2026-03-31 02:50:25.495603 | orchestrator |     }
2026-03-31 02:50:25.495611 | orchestrator | }
2026-03-31 02:50:25.495620 | orchestrator |
2026-03-31 02:50:25.495629 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:50:25.495637 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-31 02:50:25.495646 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-31 02:50:25.495655 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-31 02:50:25.495727 | orchestrator |
2026-03-31 02:50:25.495744 | orchestrator |
2026-03-31 02:50:25.495758 | orchestrator |
2026-03-31 02:50:25.495772 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:50:25.495785 | orchestrator | Tuesday 31 March 2026  02:50:25 +0000 (0:00:00.159)       0:01:19.555 *********
2026-03-31 02:50:25.495797 | orchestrator | ===============================================================================
2026-03-31 02:50:25.495810 | orchestrator | Create block VGs -------------------------------------------------------- 5.91s
2026-03-31 02:50:25.495824 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s
2026-03-31 02:50:25.495837 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.96s
2026-03-31 02:50:25.495850 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.80s
2026-03-31 02:50:25.495863 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.77s
2026-03-31 02:50:25.495878 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s
2026-03-31 02:50:25.495891 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2026-03-31 02:50:25.495905 | orchestrator | Add known links to the list of available block devices ------------------ 1.42s
2026-03-31 02:50:25.495930 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s
2026-03-31 02:50:25.936926 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.32s
2026-03-31 02:50:25.937036 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s
2026-03-31 02:50:25.937051 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s
2026-03-31 02:50:25.937085 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.86s
2026-03-31 02:50:25.937096 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2026-03-31 02:50:25.937107 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.78s
2026-03-31 02:50:25.937118 | orchestrator | Print LVM report data --------------------------------------------------- 0.78s
2026-03-31 02:50:25.937128 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.77s
2026-03-31 02:50:25.937139 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.76s
2026-03-31 02:50:25.937155 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-03-31 02:50:25.937175 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.75s
2026-03-31 02:50:38.443006 | orchestrator | 2026-03-31 02:50:38 | INFO  | Task 40ca61fb-7d24-4754-8c9a-cdf5c8769ea7 (facts) was prepared for execution.
2026-03-31 02:50:38.443114 | orchestrator | 2026-03-31 02:50:38 | INFO  | It takes a moment until task 40ca61fb-7d24-4754-8c9a-cdf5c8769ea7 (facts) has been started and output is visible here.
2026-03-31 02:50:52.003986 | orchestrator |
2026-03-31 02:50:52.004091 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-31 02:50:52.004124 | orchestrator |
2026-03-31 02:50:52.004132 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-31 02:50:52.004140 | orchestrator | Tuesday 31 March 2026  02:50:42 +0000 (0:00:00.302)       0:00:00.302 *********
2026-03-31 02:50:52.004146 | orchestrator | ok: [testbed-manager]
2026-03-31 02:50:52.004155 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:50:52.004162 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:50:52.004169 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:50:52.004175 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:50:52.004182 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:50:52.004188 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:52.004196 | orchestrator |
2026-03-31 02:50:52.004207 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-31 02:50:52.004218 | orchestrator | Tuesday 31 March 2026  02:50:44 +0000 (0:00:01.185)       0:00:01.487 *********
2026-03-31 02:50:52.004225 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:50:52.004232 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:50:52.004239 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:50:52.004245 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:50:52.004252 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:50:52.004258 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:50:52.004265 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:52.004272 | orchestrator |
2026-03-31 02:50:52.004278 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-31 02:50:52.004285 | orchestrator |
2026-03-31 02:50:52.004292 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 02:50:52.004298 | orchestrator | Tuesday 31 March 2026  02:50:45 +0000 (0:00:01.385)       0:00:02.872 *********
2026-03-31 02:50:52.004305 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:50:52.004311 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:50:52.004318 | orchestrator | ok: [testbed-manager]
2026-03-31 02:50:52.004324 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:50:52.004331 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:50:52.004337 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:50:52.004344 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:50:52.004350 | orchestrator |
2026-03-31 02:50:52.004360 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-31 02:50:52.004371 | orchestrator |
2026-03-31 02:50:52.004382 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-31 02:50:52.004393 | orchestrator | Tuesday 31 March 2026  02:50:50 +0000 (0:00:05.433)       0:00:08.306 *********
2026-03-31 02:50:52.004404 | orchestrator | skipping: [testbed-manager]
2026-03-31 02:50:52.004414 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:50:52.004424 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:50:52.004435 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:50:52.004446 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:50:52.004455 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:50:52.004465 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:50:52.004476 | orchestrator |
2026-03-31 02:50:52.004487 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 02:50:52.004499 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004511 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004524 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004536 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004548 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004567 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004577 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 02:50:52.004589 | orchestrator |
2026-03-31 02:50:52.004599 | orchestrator |
2026-03-31 02:50:52.004610 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 02:50:52.004637 | orchestrator | Tuesday 31 March 2026  02:50:51 +0000 (0:00:00.590)       0:00:08.896 *********
2026-03-31 02:50:52.004648 | orchestrator | ===============================================================================
2026-03-31 02:50:52.004659 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s
2026-03-31 02:50:52.004671 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s
2026-03-31 02:50:52.004681 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s
2026-03-31 02:50:52.004737 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-03-31 02:50:54.583130 | orchestrator | 2026-03-31 02:50:54 | INFO  | Task 263fdf8e-ab70-49dd-9803-5106e0ef19e2 (ceph) was prepared for execution.
2026-03-31 02:50:54.583229 | orchestrator | 2026-03-31 02:50:54 | INFO  | It takes a moment until task 263fdf8e-ab70-49dd-9803-5106e0ef19e2 (ceph) has been started and output is visible here.
2026-03-31 02:51:13.614390 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-31 02:51:13.614503 | orchestrator | 2.16.14
2026-03-31 02:51:13.614519 | orchestrator |
2026-03-31 02:51:13.614531 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-31 02:51:13.614542 | orchestrator |
2026-03-31 02:51:13.614552 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 02:51:13.614562 | orchestrator | Tuesday 31 March 2026  02:50:59 +0000 (0:00:00.857)       0:00:00.857 *********
2026-03-31 02:51:13.614573 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:51:13.614584 | orchestrator |
2026-03-31 02:51:13.614594 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-31 02:51:13.614604 | orchestrator | Tuesday 31 March 2026  02:51:01 +0000 (0:00:01.297)       0:00:02.155 *********
2026-03-31 02:51:13.614613 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.614623 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.614633 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.614643 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.614652 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.614661 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.614672 | orchestrator |
2026-03-31 02:51:13.614682 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-31 02:51:13.614691 | orchestrator | Tuesday 31 March 2026  02:51:02 +0000 (0:00:00.855)       0:00:03.454 *********
2026-03-31 02:51:13.614760 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.614772 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.614781 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.614791 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.614801 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.614810 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.614820 | orchestrator |
2026-03-31 02:51:13.614830 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-31 02:51:13.614839 | orchestrator | Tuesday 31 March 2026  02:51:03 +0000 (0:00:00.855)       0:00:04.309 *********
2026-03-31 02:51:13.614849 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.614859 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.614868 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.614878 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.614914 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.614925 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.614936 | orchestrator |
2026-03-31 02:51:13.614947 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-31 02:51:13.614957 | orchestrator | Tuesday 31 March 2026  02:51:04 +0000 (0:00:01.011)       0:00:05.321 *********
2026-03-31 02:51:13.614968 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.614979 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.614989 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.615000 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.615011 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.615022 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.615033 | orchestrator |
2026-03-31 02:51:13.615044 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-31 02:51:13.615055 | orchestrator | Tuesday 31 March 2026  02:51:05 +0000 (0:00:00.862)       0:00:06.183 *********
2026-03-31 02:51:13.615066 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.615077 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.615087 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.615098 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.615108 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.615119 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.615129 | orchestrator |
2026-03-31 02:51:13.615140 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-31 02:51:13.615151 | orchestrator | Tuesday 31 March 2026  02:51:06 +0000 (0:00:00.708)       0:00:06.892 *********
2026-03-31 02:51:13.615162 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.615173 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.615185 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.615201 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.615219 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.615236 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.615253 | orchestrator |
2026-03-31 02:51:13.615270 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-31 02:51:13.615289 | orchestrator | Tuesday 31 March 2026  02:51:06 +0000 (0:00:00.869)       0:00:07.762 *********
2026-03-31 02:51:13.615306 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:51:13.615324 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:51:13.615335 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:51:13.615345 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:51:13.615355 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:51:13.615364 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:51:13.615374 | orchestrator |
2026-03-31 02:51:13.615384 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-31 02:51:13.615393 | orchestrator | Tuesday 31 March 2026  02:51:07 +0000 (0:00:00.621)       0:00:08.384 *********
2026-03-31 02:51:13.615403 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.615412 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.615422 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.615431 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.615441 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.615465 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.615475 | orchestrator |
2026-03-31 02:51:13.615485 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-31 02:51:13.615494 | orchestrator | Tuesday 31 March 2026  02:51:08 +0000 (0:00:00.798)       0:00:09.182 *********
2026-03-31 02:51:13.615504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 02:51:13.615514 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 02:51:13.615523 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 02:51:13.615533 | orchestrator |
2026-03-31 02:51:13.615542 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-31 02:51:13.615552 | orchestrator | Tuesday 31 March 2026  02:51:08 +0000 (0:00:00.745)       0:00:09.860 *********
2026-03-31 02:51:13.615571 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:13.615581 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:13.615590 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:13.615619 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:13.615629 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:13.615639 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:13.615648 | orchestrator |
2026-03-31 02:51:13.615658 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-31 02:51:13.615668 | orchestrator | Tuesday 31 March 2026  02:51:09 +0000 (0:00:00.745)       0:00:10.606 *********
2026-03-31 02:51:13.615677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-03-31 02:51:13.615687 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 02:51:13.615697 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 02:51:13.615764 | orchestrator | 2026-03-31 02:51:13.615775 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 02:51:13.615785 | orchestrator | Tuesday 31 March 2026 02:51:12 +0000 (0:00:02.439) 0:00:13.046 ********* 2026-03-31 02:51:13.615794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-31 02:51:13.615805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-31 02:51:13.615815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-31 02:51:13.615824 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:13.615834 | orchestrator | 2026-03-31 02:51:13.615844 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 02:51:13.615853 | orchestrator | Tuesday 31 March 2026 02:51:12 +0000 (0:00:00.423) 0:00:13.469 ********* 2026-03-31 02:51:13.615865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615897 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:13.615907 | orchestrator | 2026-03-31 02:51:13.615916 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 02:51:13.615926 | orchestrator | Tuesday 31 March 2026 02:51:13 +0000 (0:00:00.628) 0:00:14.098 ********* 2026-03-31 02:51:13.615937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:13.615979 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:13.615989 | orchestrator | 2026-03-31 02:51:13.616004 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-31 02:51:13.616014 | orchestrator | Tuesday 31 March 2026 02:51:13 +0000 (0:00:00.176) 0:00:14.275 ********* 2026-03-31 02:51:13.616035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 02:51:10.649415', 'end': '2026-03-31 02:51:10.703237', 'delta': '0:00:00.053822', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 02:51:24.430380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 02:51:11.201309', 'end': '2026-03-31 02:51:11.244762', 'delta': '0:00:00.043453', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 02:51:24.430491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 02:51:11.741399', 'end': '2026-03-31 02:51:11.781400', 'delta': 
'0:00:00.040001', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 02:51:24.430508 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.430522 | orchestrator | 2026-03-31 02:51:24.430535 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 02:51:24.430547 | orchestrator | Tuesday 31 March 2026 02:51:13 +0000 (0:00:00.196) 0:00:14.471 ********* 2026-03-31 02:51:24.430558 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:24.430570 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:24.430581 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:24.430592 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:51:24.430602 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:51:24.430613 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:51:24.430624 | orchestrator | 2026-03-31 02:51:24.430635 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 02:51:24.430645 | orchestrator | Tuesday 31 March 2026 02:51:14 +0000 (0:00:00.819) 0:00:15.291 ********* 2026-03-31 02:51:24.430656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 02:51:24.430667 | orchestrator | 2026-03-31 02:51:24.430678 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 02:51:24.430689 | orchestrator | Tuesday 31 March 2026 02:51:15 +0000 (0:00:00.886) 0:00:16.177 ********* 2026-03-31 02:51:24.430786 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.430801 | 
orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.430812 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.430822 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.430833 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.430843 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.430854 | orchestrator | 2026-03-31 02:51:24.430865 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 02:51:24.430876 | orchestrator | Tuesday 31 March 2026 02:51:16 +0000 (0:00:01.049) 0:00:17.227 ********* 2026-03-31 02:51:24.430887 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.430897 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.430909 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.430922 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.430935 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.430946 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.430959 | orchestrator | 2026-03-31 02:51:24.430971 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 02:51:24.430984 | orchestrator | Tuesday 31 March 2026 02:51:17 +0000 (0:00:01.218) 0:00:18.445 ********* 2026-03-31 02:51:24.430996 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431008 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431019 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431032 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431044 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431070 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.431083 | orchestrator | 2026-03-31 02:51:24.431096 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 02:51:24.431108 | orchestrator | Tuesday 31 March 2026 02:51:18 +0000 
(0:00:00.643) 0:00:19.088 ********* 2026-03-31 02:51:24.431120 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431133 | orchestrator | 2026-03-31 02:51:24.431145 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 02:51:24.431157 | orchestrator | Tuesday 31 March 2026 02:51:18 +0000 (0:00:00.172) 0:00:19.261 ********* 2026-03-31 02:51:24.431170 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431181 | orchestrator | 2026-03-31 02:51:24.431194 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 02:51:24.431206 | orchestrator | Tuesday 31 March 2026 02:51:18 +0000 (0:00:00.243) 0:00:19.505 ********* 2026-03-31 02:51:24.431218 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431231 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431243 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431255 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431267 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431279 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.431290 | orchestrator | 2026-03-31 02:51:24.431319 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 02:51:24.431331 | orchestrator | Tuesday 31 March 2026 02:51:19 +0000 (0:00:00.827) 0:00:20.333 ********* 2026-03-31 02:51:24.431342 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431352 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431363 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431373 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431384 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431394 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.431406 | orchestrator | 2026-03-31 02:51:24.431425 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-31 02:51:24.431443 | orchestrator | Tuesday 31 March 2026 02:51:20 +0000 (0:00:00.838) 0:00:21.171 ********* 2026-03-31 02:51:24.431459 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431476 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431493 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431524 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431539 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431554 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.431570 | orchestrator | 2026-03-31 02:51:24.431586 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 02:51:24.431601 | orchestrator | Tuesday 31 March 2026 02:51:21 +0000 (0:00:00.947) 0:00:22.119 ********* 2026-03-31 02:51:24.431618 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431634 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431651 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431668 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431685 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431701 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.431749 | orchestrator | 2026-03-31 02:51:24.431768 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 02:51:24.431785 | orchestrator | Tuesday 31 March 2026 02:51:21 +0000 (0:00:00.683) 0:00:22.803 ********* 2026-03-31 02:51:24.431803 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431820 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431839 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431859 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431876 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431895 | orchestrator 
| skipping: [testbed-node-2] 2026-03-31 02:51:24.431907 | orchestrator | 2026-03-31 02:51:24.431918 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 02:51:24.431929 | orchestrator | Tuesday 31 March 2026 02:51:22 +0000 (0:00:00.855) 0:00:23.659 ********* 2026-03-31 02:51:24.431939 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.431949 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.431960 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.431970 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.431981 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.431991 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.432002 | orchestrator | 2026-03-31 02:51:24.432013 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 02:51:24.432024 | orchestrator | Tuesday 31 March 2026 02:51:23 +0000 (0:00:00.649) 0:00:24.308 ********* 2026-03-31 02:51:24.432035 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:24.432045 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.432056 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:24.432066 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:24.432077 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:24.432087 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:24.432098 | orchestrator | 2026-03-31 02:51:24.432109 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 02:51:24.432119 | orchestrator | Tuesday 31 March 2026 02:51:24 +0000 (0:00:00.843) 0:00:25.152 ********* 2026-03-31 02:51:24.432132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.432155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.432191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.554844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.554872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.554894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.554935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.554970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.711599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711859 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:51:24.711879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.711898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-31 02:51:24.712011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.712032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-31 02:51:24.712159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:24.712189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:24.924332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:24.924465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:24.924805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:24.924846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:24.924868 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:24.924905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.142089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.142171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.142202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-31 02:51:25.142230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.142304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.142312 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:25.142319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.142337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.387204 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.387210 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:25.387215 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:25.387220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.387260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 02:51:25.620085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 
'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.620159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 02:51:25.620168 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:25.620174 | orchestrator | 2026-03-31 02:51:25.620180 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 02:51:25.620185 | orchestrator | Tuesday 31 March 2026 02:51:25 +0000 (0:00:01.096) 0:00:26.248 ********* 2026-03-31 02:51:25.620197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.620292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.978968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979329 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.979364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987590 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987605 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:25.987815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.230753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.230873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.230927 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:26.230948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.230964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.230975 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.231006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.231017 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:26.231036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 02:51:26.231055 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.231083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.231100 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.231119 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.231136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.231170 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265062 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265179 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265194 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265245 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265264 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265276 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265289 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265386 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.265421 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503442 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503465 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503512 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:51:26.503520 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503527 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503533 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503539 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503544 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.503558 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747362 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747391 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747487 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:51:26.747497 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:51:26.747506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747525 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747534 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747543 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747563 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:26.747581 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:34.294395 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:34.295252 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:34.295302 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 02:51:34.295309 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:51:34.295316 | orchestrator |
2026-03-31 02:51:34.295321 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 02:51:34.295327 | orchestrator | Tuesday 31 March 2026 02:51:26 +0000 (0:00:01.357) 0:00:27.606 *********
2026-03-31 02:51:34.295331 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:34.295337 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:34.295341 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:34.295346 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:34.295350 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:34.295353 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:34.295357 | orchestrator |
2026-03-31 02:51:34.295374 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 02:51:34.295378 | orchestrator | Tuesday 31 March 2026 02:51:27 +0000 (0:00:00.962) 0:00:28.569 *********
2026-03-31 02:51:34.295382 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:51:34.295386 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:51:34.295389 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:51:34.295393 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:51:34.295397 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:51:34.295400 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:51:34.295404 | orchestrator |
2026-03-31 02:51:34.295408 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 02:51:34.295412 | orchestrator | Tuesday 31 March 2026 02:51:28 +0000 (0:00:00.611) 0:00:29.515 *********
2026-03-31 02:51:34.295415 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:51:34.295419 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:51:34.295423 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:51:34.295427 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:51:34.295430 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:51:34.295434 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:51:34.295438 | orchestrator |
2026-03-31 02:51:34.295442 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 02:51:34.295446 | orchestrator | Tuesday 31 March 2026 02:51:29 +0000 (0:00:00.611) 0:00:30.126 *********
2026-03-31 02:51:34.295449 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:51:34.295453 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:51:34.295457 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:34.295461 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:34.295464 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:34.295468 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:34.295472 | orchestrator | 2026-03-31 02:51:34.295475 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 02:51:34.295479 | orchestrator | Tuesday 31 March 2026 02:51:30 +0000 (0:00:00.903) 0:00:31.030 ********* 2026-03-31 02:51:34.295483 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:34.295487 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:34.295490 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:34.295498 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:34.295502 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:34.295505 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:34.295509 | orchestrator | 2026-03-31 02:51:34.295513 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 02:51:34.295517 | orchestrator | Tuesday 31 March 2026 02:51:30 +0000 (0:00:00.672) 0:00:31.702 ********* 2026-03-31 02:51:34.295520 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:34.295524 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:34.295528 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:34.295531 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:34.295535 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:34.295539 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:34.295542 | orchestrator | 2026-03-31 02:51:34.295546 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 02:51:34.295550 | orchestrator | Tuesday 31 March 2026 02:51:31 +0000 (0:00:00.881) 0:00:32.584 ********* 
2026-03-31 02:51:34.295554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-31 02:51:34.295558 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-31 02:51:34.295562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-31 02:51:34.295566 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-31 02:51:34.295569 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-31 02:51:34.295573 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-31 02:51:34.295577 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-31 02:51:34.295581 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 02:51:34.295584 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-31 02:51:34.295588 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-31 02:51:34.295592 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-31 02:51:34.295596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 02:51:34.295599 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 02:51:34.295603 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-31 02:51:34.295607 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 02:51:34.295611 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-31 02:51:34.295614 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-31 02:51:34.295621 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 02:51:34.295625 | orchestrator | 2026-03-31 02:51:34.295629 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 02:51:34.295632 | orchestrator | Tuesday 31 March 2026 02:51:33 +0000 (0:00:01.830) 0:00:34.415 ********* 2026-03-31 02:51:34.295636 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-31 02:51:34.295641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-31 02:51:34.295645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-31 02:51:34.295648 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:34.295652 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 02:51:34.295656 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 02:51:34.295659 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 02:51:34.295663 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:34.295667 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 02:51:34.295671 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 02:51:34.295674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 02:51:34.295678 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:34.295682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 02:51:34.295686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 02:51:34.295695 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 02:51:51.593203 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.593345 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 02:51:51.593371 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 02:51:51.593388 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-31 02:51:51.593408 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:51.593425 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 02:51:51.593442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 02:51:51.593460 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-31 02:51:51.593477 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:51.593495 | orchestrator | 2026-03-31 02:51:51.593513 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 02:51:51.593532 | orchestrator | Tuesday 31 March 2026 02:51:34 +0000 (0:00:01.049) 0:00:35.464 ********* 2026-03-31 02:51:51.593550 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.593568 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:51.593586 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:51.593604 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:51:51.593622 | orchestrator | 2026-03-31 02:51:51.593640 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 02:51:51.593658 | orchestrator | Tuesday 31 March 2026 02:51:35 +0000 (0:00:01.110) 0:00:36.575 ********* 2026-03-31 02:51:51.593676 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.593693 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.593712 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.593729 | orchestrator | 2026-03-31 02:51:51.593786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 02:51:51.593807 | orchestrator | Tuesday 31 March 2026 02:51:36 +0000 (0:00:00.437) 0:00:37.013 ********* 2026-03-31 02:51:51.593828 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.593846 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.593880 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.593898 | orchestrator | 2026-03-31 02:51:51.593915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-31 02:51:51.593928 | orchestrator | Tuesday 31 March 2026 02:51:36 +0000 (0:00:00.352) 0:00:37.366 ********* 2026-03-31 02:51:51.593945 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.593968 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.593992 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.594015 | orchestrator | 2026-03-31 02:51:51.594125 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 02:51:51.594152 | orchestrator | Tuesday 31 March 2026 02:51:37 +0000 (0:00:00.562) 0:00:37.928 ********* 2026-03-31 02:51:51.594177 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:51.594204 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:51.594229 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:51.594253 | orchestrator | 2026-03-31 02:51:51.594277 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 02:51:51.594301 | orchestrator | Tuesday 31 March 2026 02:51:37 +0000 (0:00:00.491) 0:00:38.419 ********* 2026-03-31 02:51:51.594325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:51:51.594350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:51:51.594373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:51:51.594399 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.594425 | orchestrator | 2026-03-31 02:51:51.594451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 02:51:51.594518 | orchestrator | Tuesday 31 March 2026 02:51:37 +0000 (0:00:00.389) 0:00:38.809 ********* 2026-03-31 02:51:51.594544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:51:51.594569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:51:51.594594 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:51:51.594618 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.594637 | orchestrator | 2026-03-31 02:51:51.594651 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 02:51:51.594665 | orchestrator | Tuesday 31 March 2026 02:51:38 +0000 (0:00:00.404) 0:00:39.213 ********* 2026-03-31 02:51:51.594694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:51:51.594707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:51:51.594719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:51:51.594752 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.594766 | orchestrator | 2026-03-31 02:51:51.594779 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 02:51:51.594791 | orchestrator | Tuesday 31 March 2026 02:51:38 +0000 (0:00:00.442) 0:00:39.656 ********* 2026-03-31 02:51:51.594803 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:51.594815 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:51.594827 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:51.594839 | orchestrator | 2026-03-31 02:51:51.594851 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 02:51:51.594862 | orchestrator | Tuesday 31 March 2026 02:51:39 +0000 (0:00:00.384) 0:00:40.040 ********* 2026-03-31 02:51:51.594875 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 02:51:51.594888 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 02:51:51.594901 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 02:51:51.594913 | orchestrator | 2026-03-31 02:51:51.594927 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 02:51:51.594941 | orchestrator | Tuesday 31 March 2026 
02:51:40 +0000 (0:00:01.090) 0:00:41.131 ********* 2026-03-31 02:51:51.594954 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 02:51:51.594994 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 02:51:51.595009 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 02:51:51.595022 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-31 02:51:51.595036 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 02:51:51.595049 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 02:51:51.595063 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 02:51:51.595076 | orchestrator | 2026-03-31 02:51:51.595089 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 02:51:51.595102 | orchestrator | Tuesday 31 March 2026 02:51:41 +0000 (0:00:00.885) 0:00:42.017 ********* 2026-03-31 02:51:51.595115 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 02:51:51.595129 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 02:51:51.595148 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 02:51:51.595167 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-31 02:51:51.595187 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 02:51:51.595206 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 02:51:51.595222 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-31 02:51:51.595240 | orchestrator | 2026-03-31 02:51:51.595257 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 02:51:51.595292 | orchestrator | Tuesday 31 March 2026 02:51:43 +0000 (0:00:02.017) 0:00:44.035 ********* 2026-03-31 02:51:51.595311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:51:51.595330 | orchestrator | 2026-03-31 02:51:51.595346 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 02:51:51.595365 | orchestrator | Tuesday 31 March 2026 02:51:44 +0000 (0:00:01.314) 0:00:45.349 ********* 2026-03-31 02:51:51.595382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:51:51.595399 | orchestrator | 2026-03-31 02:51:51.595419 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 02:51:51.595436 | orchestrator | Tuesday 31 March 2026 02:51:45 +0000 (0:00:01.378) 0:00:46.727 ********* 2026-03-31 02:51:51.595455 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.595473 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.595490 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.595508 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:51:51.595526 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:51:51.595542 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:51:51.595558 | orchestrator | 2026-03-31 02:51:51.595575 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 02:51:51.595592 | orchestrator | Tuesday 31 March 2026 02:51:47 +0000 (0:00:01.349) 0:00:48.077 ********* 2026-03-31 
02:51:51.595608 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.595626 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:51.595642 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:51.595659 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:51.595674 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:51.595691 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:51.595708 | orchestrator | 2026-03-31 02:51:51.595726 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-31 02:51:51.595772 | orchestrator | Tuesday 31 March 2026 02:51:47 +0000 (0:00:00.704) 0:00:48.782 ********* 2026-03-31 02:51:51.595785 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:51.595797 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:51.595809 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.595828 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:51.595849 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:51.595870 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:51.595891 | orchestrator | 2026-03-31 02:51:51.595926 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 02:51:51.595949 | orchestrator | Tuesday 31 March 2026 02:51:48 +0000 (0:00:00.946) 0:00:49.728 ********* 2026-03-31 02:51:51.595972 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.595995 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:51:51.596018 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:51:51.596038 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:51:51.596062 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:51:51.596084 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:51:51.596105 | orchestrator | 2026-03-31 02:51:51.596129 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 02:51:51.596149 | orchestrator | 
Tuesday 31 March 2026 02:51:49 +0000 (0:00:00.731) 0:00:50.459 ********* 2026-03-31 02:51:51.596162 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.596182 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.596204 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.596224 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:51:51.596246 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:51:51.596266 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:51:51.596286 | orchestrator | 2026-03-31 02:51:51.596306 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 02:51:51.596345 | orchestrator | Tuesday 31 March 2026 02:51:50 +0000 (0:00:01.361) 0:00:51.820 ********* 2026-03-31 02:51:51.596367 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:51:51.596388 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:51:51.596408 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:51:51.596428 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:51:51.596466 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.159956 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160058 | orchestrator | 2026-03-31 02:52:13.160069 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 02:52:13.160075 | orchestrator | Tuesday 31 March 2026 02:51:51 +0000 (0:00:00.633) 0:00:52.454 ********* 2026-03-31 02:52:13.160080 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160085 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160090 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160094 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160099 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160106 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160112 | orchestrator | 2026-03-31 02:52:13.160116 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 02:52:13.160121 | orchestrator | Tuesday 31 March 2026 02:51:52 +0000 (0:00:00.934) 0:00:53.388 ********* 2026-03-31 02:52:13.160125 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160131 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160136 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160140 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:52:13.160144 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:52:13.160149 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:52:13.160153 | orchestrator | 2026-03-31 02:52:13.160157 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 02:52:13.160161 | orchestrator | Tuesday 31 March 2026 02:51:53 +0000 (0:00:01.154) 0:00:54.543 ********* 2026-03-31 02:52:13.160166 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160170 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160174 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160178 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:52:13.160183 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:52:13.160187 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:52:13.160191 | orchestrator | 2026-03-31 02:52:13.160195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 02:52:13.160200 | orchestrator | Tuesday 31 March 2026 02:51:55 +0000 (0:00:01.386) 0:00:55.929 ********* 2026-03-31 02:52:13.160204 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160209 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160213 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160217 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160222 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160229 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160235 | 
orchestrator | 2026-03-31 02:52:13.160242 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 02:52:13.160246 | orchestrator | Tuesday 31 March 2026 02:51:55 +0000 (0:00:00.639) 0:00:56.569 ********* 2026-03-31 02:52:13.160251 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160257 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160264 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160268 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:52:13.160272 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:52:13.160277 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:52:13.160281 | orchestrator | 2026-03-31 02:52:13.160285 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 02:52:13.160289 | orchestrator | Tuesday 31 March 2026 02:51:56 +0000 (0:00:00.913) 0:00:57.483 ********* 2026-03-31 02:52:13.160294 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160298 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160318 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160323 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160327 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160332 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160336 | orchestrator | 2026-03-31 02:52:13.160340 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 02:52:13.160344 | orchestrator | Tuesday 31 March 2026 02:51:57 +0000 (0:00:00.656) 0:00:58.140 ********* 2026-03-31 02:52:13.160349 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160353 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160357 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160361 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160366 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
02:52:13.160370 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160374 | orchestrator | 2026-03-31 02:52:13.160378 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 02:52:13.160383 | orchestrator | Tuesday 31 March 2026 02:51:58 +0000 (0:00:00.919) 0:00:59.059 ********* 2026-03-31 02:52:13.160387 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160391 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160395 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160400 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160404 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160418 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160422 | orchestrator | 2026-03-31 02:52:13.160427 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 02:52:13.160431 | orchestrator | Tuesday 31 March 2026 02:51:58 +0000 (0:00:00.628) 0:00:59.688 ********* 2026-03-31 02:52:13.160435 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160439 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160444 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160448 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160452 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160456 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160461 | orchestrator | 2026-03-31 02:52:13.160465 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 02:52:13.160469 | orchestrator | Tuesday 31 March 2026 02:51:59 +0000 (0:00:00.870) 0:01:00.558 ********* 2026-03-31 02:52:13.160473 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160478 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160482 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160486 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 02:52:13.160491 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:52:13.160496 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:52:13.160501 | orchestrator | 2026-03-31 02:52:13.160506 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 02:52:13.160511 | orchestrator | Tuesday 31 March 2026 02:52:00 +0000 (0:00:00.675) 0:01:01.233 ********* 2026-03-31 02:52:13.160516 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:52:13.160521 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:52:13.160526 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:52:13.160542 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:52:13.160547 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:52:13.160552 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:52:13.160557 | orchestrator | 2026-03-31 02:52:13.160562 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 02:52:13.160567 | orchestrator | Tuesday 31 March 2026 02:52:01 +0000 (0:00:00.929) 0:01:02.163 ********* 2026-03-31 02:52:13.160572 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160577 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160581 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:52:13.160586 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:52:13.160591 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:52:13.160596 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:52:13.160605 | orchestrator | 2026-03-31 02:52:13.160610 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 02:52:13.160615 | orchestrator | Tuesday 31 March 2026 02:52:01 +0000 (0:00:00.678) 0:01:02.842 ********* 2026-03-31 02:52:13.160620 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:52:13.160625 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:52:13.160629 | 
orchestrator | ok: [testbed-node-5]
2026-03-31 02:52:13.160634 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:52:13.160639 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:52:13.160644 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:52:13.160649 | orchestrator |
2026-03-31 02:52:13.160656 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 02:52:13.160664 | orchestrator | Tuesday 31 March 2026 02:52:03 +0000 (0:00:01.413) 0:01:04.255 *********
2026-03-31 02:52:13.160669 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:52:13.160674 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:52:13.160679 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:52:13.160683 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:52:13.160688 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:52:13.160693 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:52:13.160698 | orchestrator |
2026-03-31 02:52:13.160703 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 02:52:13.160708 | orchestrator | Tuesday 31 March 2026 02:52:05 +0000 (0:00:01.835) 0:01:06.090 *********
2026-03-31 02:52:13.160713 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:52:13.160718 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:52:13.160722 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:52:13.160727 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:52:13.160732 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:52:13.160737 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:52:13.160742 | orchestrator |
2026-03-31 02:52:13.160767 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 02:52:13.160773 | orchestrator | Tuesday 31 March 2026 02:52:07 +0000 (0:00:02.391) 0:01:08.482 *********
2026-03-31 02:52:13.160779 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:52:13.160785 | orchestrator |
2026-03-31 02:52:13.160790 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 02:52:13.160795 | orchestrator | Tuesday 31 March 2026 02:52:08 +0000 (0:00:01.342) 0:01:09.825 *********
2026-03-31 02:52:13.160800 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:52:13.160805 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:52:13.160810 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:52:13.160815 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:52:13.160819 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:52:13.160824 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:52:13.160829 | orchestrator |
2026-03-31 02:52:13.160834 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 02:52:13.160839 | orchestrator | Tuesday 31 March 2026 02:52:09 +0000 (0:00:00.621) 0:01:10.446 *********
2026-03-31 02:52:13.160844 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:52:13.160849 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:52:13.160854 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:52:13.160859 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:52:13.160863 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:52:13.160867 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:52:13.160871 | orchestrator |
2026-03-31 02:52:13.160876 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 02:52:13.160880 | orchestrator | Tuesday 31 March 2026 02:52:10 +0000 (0:00:00.857) 0:01:11.304 *********
2026-03-31 02:52:13.160884 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160891 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160900 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160904 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160908 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160913 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160918 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160922 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160926 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 02:52:13.160931 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160935 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160939 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 02:52:13.160943 | orchestrator |
2026-03-31 02:52:13.160948 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 02:52:13.160952 | orchestrator | Tuesday 31 March 2026 02:52:11 +0000 (0:00:01.458) 0:01:12.762 *********
2026-03-31 02:52:13.160959 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:53:30.466094 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:53:30.466180 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:53:30.466193 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:53:30.466200 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:53:30.466207 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:53:30.466213 | orchestrator |
2026-03-31 02:53:30.466221 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 02:53:30.466229 | orchestrator | Tuesday 31 March 2026 02:52:13 +0000 (0:00:01.253) 0:01:14.015 *********
2026-03-31 02:53:30.466235 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466241 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466247 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466254 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466261 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466268 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466274 | orchestrator |
2026-03-31 02:53:30.466281 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 02:53:30.466288 | orchestrator | Tuesday 31 March 2026 02:52:13 +0000 (0:00:00.658) 0:01:14.674 *********
2026-03-31 02:53:30.466296 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466302 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466311 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466321 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466327 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466334 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466341 | orchestrator |
2026-03-31 02:53:30.466348 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 02:53:30.466355 | orchestrator | Tuesday 31 March 2026 02:52:14 +0000 (0:00:00.953) 0:01:15.628 *********
2026-03-31 02:53:30.466362 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466368 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466374 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466380 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466387 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466394 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466401 | orchestrator |
2026-03-31 02:53:30.466409 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 02:53:30.466417 | orchestrator | Tuesday 31 March 2026 02:52:15 +0000 (0:00:00.666) 0:01:16.295 *********
2026-03-31 02:53:30.466453 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:53:30.466461 | orchestrator |
2026-03-31 02:53:30.466469 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 02:53:30.466478 | orchestrator | Tuesday 31 March 2026 02:52:16 +0000 (0:00:01.314) 0:01:17.609 *********
2026-03-31 02:53:30.466484 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:30.466491 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:53:30.466497 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:30.466502 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:53:30.466508 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:30.466513 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:53:30.466519 | orchestrator |
2026-03-31 02:53:30.466525 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 02:53:30.466533 | orchestrator | Tuesday 31 March 2026 02:53:17 +0000 (0:01:00.608) 0:02:18.217 *********
2026-03-31 02:53:30.466539 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466546 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466552 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466559 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466565 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466572 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466577 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466581 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466585 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466589 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466603 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466607 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466610 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466614 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466618 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466622 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466625 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466629 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466633 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466636 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466640 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 02:53:30.466645 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 02:53:30.466649 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 02:53:30.466653 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466658 | orchestrator |
2026-03-31 02:53:30.466662 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 02:53:30.466680 | orchestrator | Tuesday 31 March 2026 02:53:18 +0000 (0:00:00.748) 0:02:18.966 *********
2026-03-31 02:53:30.466685 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466689 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466694 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466698 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466703 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466713 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466717 | orchestrator |
2026-03-31 02:53:30.466722 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 02:53:30.466726 | orchestrator | Tuesday 31 March 2026 02:53:19 +0000 (0:00:00.921) 0:02:19.888 *********
2026-03-31 02:53:30.466730 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466734 | orchestrator |
2026-03-31 02:53:30.466741 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 02:53:30.466748 | orchestrator | Tuesday 31 March 2026 02:53:19 +0000 (0:00:00.186) 0:02:20.074 *********
2026-03-31 02:53:30.466753 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466759 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466765 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466771 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466777 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466783 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466789 | orchestrator |
2026-03-31 02:53:30.466795 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 02:53:30.466801 | orchestrator | Tuesday 31 March 2026 02:53:19 +0000 (0:00:00.635) 0:02:20.709 *********
2026-03-31 02:53:30.466826 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466832 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466838 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466845 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466850 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466857 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466864 | orchestrator |
2026-03-31 02:53:30.466870 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 02:53:30.466876 | orchestrator | Tuesday 31 March 2026 02:53:20 +0000 (0:00:00.932) 0:02:21.642 *********
2026-03-31 02:53:30.466883 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.466889 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.466895 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.466902 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.466909 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.466913 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.466918 | orchestrator |
2026-03-31 02:53:30.466922 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 02:53:30.466927 | orchestrator | Tuesday 31 March 2026 02:53:21 +0000 (0:00:00.673) 0:02:22.315 *********
2026-03-31 02:53:30.466931 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:30.466935 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:30.466940 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:53:30.466944 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:30.466948 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:53:30.466952 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:53:30.466957 | orchestrator |
2026-03-31 02:53:30.466961 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 02:53:30.466965 | orchestrator | Tuesday 31 March 2026 02:53:25 +0000 (0:00:03.796) 0:02:26.111 *********
2026-03-31 02:53:30.466969 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:30.466974 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:30.466978 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:30.466982 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:53:30.466987 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:53:30.466991 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:53:30.466995 | orchestrator |
2026-03-31 02:53:30.466999 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 02:53:30.467004 | orchestrator | Tuesday 31 March 2026 02:53:25 +0000 (0:00:00.736) 0:02:26.848 *********
2026-03-31 02:53:30.467009 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:53:30.467015 | orchestrator |
2026-03-31 02:53:30.467019 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 02:53:30.467027 | orchestrator | Tuesday 31 March 2026 02:53:27 +0000 (0:00:01.501) 0:02:28.350 *********
2026-03-31 02:53:30.467031 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.467035 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.467038 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.467042 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.467050 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.467056 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.467062 | orchestrator |
2026-03-31 02:53:30.467068 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 02:53:30.467074 | orchestrator | Tuesday 31 March 2026 02:53:28 +0000 (0:00:00.961) 0:02:29.311 *********
2026-03-31 02:53:30.467081 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.467087 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.467093 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.467099 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.467105 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.467111 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.467117 | orchestrator |
2026-03-31 02:53:30.467124 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 02:53:30.467128 | orchestrator | Tuesday 31 March 2026 02:53:29 +0000 (0:00:00.663) 0:02:29.975 *********
2026-03-31 02:53:30.467132 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.467135 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.467139 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:30.467143 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:30.467147 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:30.467150 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:30.467154 | orchestrator |
2026-03-31 02:53:30.467158 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 02:53:30.467161 | orchestrator | Tuesday 31 March 2026 02:53:30 +0000 (0:00:00.902) 0:02:30.877 *********
2026-03-31 02:53:30.467165 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:30.467169 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:30.467178 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:43.479799 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:43.479948 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:43.479960 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:43.479969 | orchestrator |
2026-03-31 02:53:43.479978 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 02:53:43.479987 | orchestrator | Tuesday 31 March 2026 02:53:30 +0000 (0:00:00.686) 0:02:31.564 *********
2026-03-31 02:53:43.479995 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:43.480003 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:43.480011 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:43.480019 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:43.480027 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:43.480035 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:43.480047 | orchestrator |
2026-03-31 02:53:43.480060 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 02:53:43.480073 | orchestrator | Tuesday 31 March 2026 02:53:31 +0000 (0:00:00.958) 0:02:32.522 *********
2026-03-31 02:53:43.480086 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:43.480099 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:43.480110 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:43.480123 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:43.480135 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:43.480147 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:43.480161 | orchestrator |
2026-03-31 02:53:43.480174 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 02:53:43.480187 | orchestrator | Tuesday 31 March 2026 02:53:32 +0000 (0:00:00.635) 0:02:33.158 *********
2026-03-31 02:53:43.480229 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:43.480238 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:43.480246 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:43.480254 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:43.480262 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:43.480270 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:43.480277 | orchestrator |
2026-03-31 02:53:43.480285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 02:53:43.480293 | orchestrator | Tuesday 31 March 2026 02:53:33 +0000 (0:00:00.884) 0:02:34.043 *********
2026-03-31 02:53:43.480301 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:43.480309 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:43.480316 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:43.480324 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:43.480332 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:43.480339 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:43.480347 | orchestrator |
2026-03-31 02:53:43.480355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 02:53:43.480365 | orchestrator | Tuesday 31 March 2026 02:53:33 +0000 (0:00:00.677) 0:02:34.720 *********
2026-03-31 02:53:43.480374 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:43.480384 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:43.480393 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:43.480402 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:53:43.480410 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:53:43.480419 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:53:43.480428 | orchestrator |
2026-03-31 02:53:43.480437 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 02:53:43.480446 | orchestrator | Tuesday 31 March 2026 02:53:35 +0000 (0:00:01.429) 0:02:36.150 *********
2026-03-31 02:53:43.480455 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:53:43.480466 | orchestrator |
2026-03-31 02:53:43.480476 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 02:53:43.480485 | orchestrator | Tuesday 31 March 2026 02:53:36 +0000 (0:00:01.377) 0:02:37.527 *********
2026-03-31 02:53:43.480494 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-31 02:53:43.480503 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480512 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-31 02:53:43.480519 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-31 02:53:43.480527 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-31 02:53:43.480535 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480542 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480564 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-31 02:53:43.480572 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480580 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-31 02:53:43.480588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480596 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480611 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480619 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-31 02:53:43.480635 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480642 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480650 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480665 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480673 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480681 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-31 02:53:43.480689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480697 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480728 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480736 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480743 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-31 02:53:43.480751 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480766 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480774 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480782 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480789 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-31 02:53:43.480797 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480804 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480875 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480883 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480891 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-31 02:53:43.480898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480906 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480914 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.480922 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480929 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-31 02:53:43.480944 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480960 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.480968 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.480975 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.480983 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-31 02:53:43.480991 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.480998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.481006 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.481014 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481022 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.481029 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 02:53:43.481037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.481045 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481052 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481066 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.481074 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481081 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481089 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 02:53:43.481097 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481104 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481117 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481125 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481133 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 02:53:43.481140 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481156 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481169 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481183 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:43.481197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 02:53:43.481218 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:43.481233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481247 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481261 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:43.481273 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-31 02:53:43.481287 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 02:53:43.481309 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-31 02:53:59.086463 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-31 02:53:59.086562 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:59.086575 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:59.086585 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-31 02:53:59.086594 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 02:53:59.086603 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-31 02:53:59.086611 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-31 02:53:59.086620 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-31 02:53:59.086628 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-31 02:53:59.086637 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-31 02:53:59.086645 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-31 02:53:59.086654 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-31 02:53:59.086662 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-31 02:53:59.086671 | orchestrator |
2026-03-31 02:53:59.086681 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 02:53:59.086689 | orchestrator | Tuesday 31 March 2026 02:53:43 +0000 (0:00:06.779) 0:02:44.307 *********
2026-03-31 02:53:59.086698 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.086708 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.086720 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.086734 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:53:59.086778 | orchestrator |
2026-03-31 02:53:59.086793 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-31 02:53:59.086806 | orchestrator | Tuesday 31 March 2026 02:53:44 +0000 (0:00:01.177) 0:02:45.484 *********
2026-03-31 02:53:59.086820 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.086930 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.086948 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.086964 | orchestrator |
2026-03-31 02:53:59.086979 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-31 02:53:59.086993 | orchestrator | Tuesday 31 March 2026 02:53:45 +0000 (0:00:00.715) 0:02:46.200 *********
2026-03-31 02:53:59.087004 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.087014 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.087024 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 02:53:59.087034 | orchestrator |
2026-03-31 02:53:59.087044 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 02:53:59.087054 | orchestrator | Tuesday 31 March 2026 02:53:46 +0000 (0:00:01.209) 0:02:47.409 *********
2026-03-31 02:53:59.087064 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:59.087074 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:59.087084 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:59.087094 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087104 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087114 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087124 | orchestrator |
2026-03-31 02:53:59.087134 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 02:53:59.087159 | orchestrator | Tuesday 31 March 2026 02:53:47 +0000 (0:00:00.916) 0:02:48.326 *********
2026-03-31 02:53:59.087169 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:53:59.087179 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:53:59.087189 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:53:59.087199 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087208 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087218 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087229 | orchestrator |
2026-03-31 02:53:59.087239 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 02:53:59.087249 | orchestrator | Tuesday 31 March 2026 02:53:48 +0000 (0:00:00.638) 0:02:48.965 *********
2026-03-31 02:53:59.087259 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087269 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087279 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:59.087290 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087299 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087309 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087318 | orchestrator |
2026-03-31 02:53:59.087405 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 02:53:59.087415 | orchestrator | Tuesday 31 March 2026 02:53:48 +0000 (0:00:00.868) 0:02:49.833 *********
2026-03-31 02:53:59.087426 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087434 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087443 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:59.087451 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087460 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087468 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087488 | orchestrator |
2026-03-31 02:53:59.087506 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 02:53:59.087543 | orchestrator | Tuesday 31 March 2026 02:53:49 +0000 (0:00:00.647) 0:02:50.480 *********
2026-03-31 02:53:59.087557 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087572 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087588 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:59.087604 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087619 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087632 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087641 | orchestrator |
2026-03-31 02:53:59.087650 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 02:53:59.087658 | orchestrator | Tuesday 31 March 2026 02:53:50 +0000 (0:00:00.924) 0:02:51.405 *********
2026-03-31 02:53:59.087667 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087675 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087684 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:59.087692 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087700 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087709 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087717 | orchestrator |
2026-03-31 02:53:59.087726 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 02:53:59.087734 | orchestrator | Tuesday 31 March 2026 02:53:51 +0000 (0:00:00.663) 0:02:52.069 *********
2026-03-31 02:53:59.087742 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087751 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087759 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:53:59.087767 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:53:59.087776 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:53:59.087784 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:53:59.087792 | orchestrator |
2026-03-31 02:53:59.087801 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 02:53:59.087810 | orchestrator | Tuesday 31 March 2026 02:53:52 +0000 (0:00:00.885) 0:02:52.955 *********
2026-03-31 02:53:59.087818 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:53:59.087826 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:53:59.087863 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:53:59.087872 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.087881 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.087889 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.087897 | orchestrator | 2026-03-31 02:53:59.087906 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 02:53:59.087914 | orchestrator | Tuesday 31 March 2026 02:53:52 +0000 (0:00:00.645) 0:02:53.600 ********* 2026-03-31 02:53:59.087923 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.087931 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.087940 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.087948 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:53:59.087956 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:53:59.087965 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:53:59.087973 | orchestrator | 2026-03-31 02:53:59.087982 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 02:53:59.087990 | orchestrator | Tuesday 31 March 2026 02:53:55 +0000 (0:00:02.896) 0:02:56.497 ********* 2026-03-31 02:53:59.087999 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:53:59.088007 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:53:59.088021 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:53:59.088034 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.088058 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.088073 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.088086 | orchestrator | 2026-03-31 02:53:59.088099 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 02:53:59.088123 | orchestrator | Tuesday 31 March 2026 02:53:56 +0000 (0:00:00.688) 0:02:57.186 ********* 2026-03-31 
02:53:59.088138 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:53:59.088152 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:53:59.088165 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:53:59.088179 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.088192 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.088206 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.088219 | orchestrator | 2026-03-31 02:53:59.088232 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 02:53:59.088246 | orchestrator | Tuesday 31 March 2026 02:53:57 +0000 (0:00:00.959) 0:02:58.145 ********* 2026-03-31 02:53:59.088258 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:53:59.088272 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:53:59.088296 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:53:59.088310 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.088323 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.088336 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.088350 | orchestrator | 2026-03-31 02:53:59.088363 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 02:53:59.088377 | orchestrator | Tuesday 31 March 2026 02:53:57 +0000 (0:00:00.710) 0:02:58.856 ********* 2026-03-31 02:53:59.088391 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 02:53:59.088405 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 02:53:59.088420 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 02:53:59.088434 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:53:59.088449 | 
orchestrator | skipping: [testbed-node-1] 2026-03-31 02:53:59.088463 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:53:59.088477 | orchestrator | 2026-03-31 02:53:59.088492 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 02:53:59.088505 | orchestrator | Tuesday 31 March 2026 02:53:58 +0000 (0:00:00.975) 0:02:59.832 ********* 2026-03-31 02:53:59.088536 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-31 02:54:16.356899 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-31 02:54:16.357048 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.357070 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-31 02:54:16.357084 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-31 02:54:16.357095 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357107 | orchestrator | skipping: 
[testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-31 02:54:16.357153 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-31 02:54:16.357166 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357178 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357189 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357200 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357211 | orchestrator | 2026-03-31 02:54:16.357224 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 02:54:16.357236 | orchestrator | Tuesday 31 March 2026 02:53:59 +0000 (0:00:00.680) 0:03:00.513 ********* 2026-03-31 02:54:16.357247 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.357259 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357269 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357280 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357291 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357302 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357312 | orchestrator | 2026-03-31 02:54:16.357324 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 02:54:16.357335 | orchestrator | Tuesday 31 March 2026 02:54:00 +0000 (0:00:00.968) 0:03:01.481 ********* 2026-03-31 02:54:16.357345 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 02:54:16.357357 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357370 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357383 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357395 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357408 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357420 | orchestrator | 2026-03-31 02:54:16.357433 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 02:54:16.357448 | orchestrator | Tuesday 31 March 2026 02:54:01 +0000 (0:00:00.630) 0:03:02.111 ********* 2026-03-31 02:54:16.357475 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.357513 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357550 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357568 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357585 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357604 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357621 | orchestrator | 2026-03-31 02:54:16.357640 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 02:54:16.357659 | orchestrator | Tuesday 31 March 2026 02:54:02 +0000 (0:00:00.976) 0:03:03.088 ********* 2026-03-31 02:54:16.357677 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.357697 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357717 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357736 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357754 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357770 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357781 | orchestrator | 2026-03-31 02:54:16.357792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] 
**** 2026-03-31 02:54:16.357803 | orchestrator | Tuesday 31 March 2026 02:54:03 +0000 (0:00:00.917) 0:03:04.005 ********* 2026-03-31 02:54:16.357813 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.357824 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.357834 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.357875 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.357887 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.357898 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.357920 | orchestrator | 2026-03-31 02:54:16.357931 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 02:54:16.357942 | orchestrator | Tuesday 31 March 2026 02:54:03 +0000 (0:00:00.694) 0:03:04.700 ********* 2026-03-31 02:54:16.357954 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:54:16.357966 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:54:16.357977 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:54:16.358086 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.358112 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.358131 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.358163 | orchestrator | 2026-03-31 02:54:16.358181 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 02:54:16.358200 | orchestrator | Tuesday 31 March 2026 02:54:04 +0000 (0:00:00.924) 0:03:05.624 ********* 2026-03-31 02:54:16.358212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:16.358223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:54:16.358233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:16.358245 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.358255 | orchestrator | 2026-03-31 02:54:16.358266 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 02:54:16.358277 | orchestrator | Tuesday 31 March 2026 02:54:05 +0000 (0:00:00.474) 0:03:06.099 ********* 2026-03-31 02:54:16.358288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:16.358299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:54:16.358309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:16.358320 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.358331 | orchestrator | 2026-03-31 02:54:16.358342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 02:54:16.358352 | orchestrator | Tuesday 31 March 2026 02:54:05 +0000 (0:00:00.472) 0:03:06.572 ********* 2026-03-31 02:54:16.358363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:16.358374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:54:16.358384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:16.358395 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.358406 | orchestrator | 2026-03-31 02:54:16.358416 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 02:54:16.358427 | orchestrator | Tuesday 31 March 2026 02:54:06 +0000 (0:00:00.453) 0:03:07.026 ********* 2026-03-31 02:54:16.358438 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:54:16.358449 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:54:16.358459 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:54:16.358470 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.358480 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.358491 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.358502 | orchestrator | 2026-03-31 02:54:16.358512 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-03-31 02:54:16.358523 | orchestrator | Tuesday 31 March 2026 02:54:06 +0000 (0:00:00.687) 0:03:07.713 ********* 2026-03-31 02:54:16.358534 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 02:54:16.358545 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 02:54:16.358556 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 02:54:16.358566 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-31 02:54:16.358577 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:16.358588 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-31 02:54:16.358598 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:16.358609 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-31 02:54:16.358620 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:16.358630 | orchestrator | 2026-03-31 02:54:16.358641 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 02:54:16.358662 | orchestrator | Tuesday 31 March 2026 02:54:08 +0000 (0:00:01.909) 0:03:09.623 ********* 2026-03-31 02:54:16.358672 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:54:16.358688 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:54:16.358706 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:54:16.358723 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:54:16.358741 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:54:16.358759 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:54:16.358777 | orchestrator | 2026-03-31 02:54:16.358796 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 02:54:16.358815 | orchestrator | Tuesday 31 March 2026 02:54:11 +0000 (0:00:02.759) 0:03:12.382 ********* 2026-03-31 02:54:16.358834 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:54:16.358920 | orchestrator | changed: [testbed-node-4] 
2026-03-31 02:54:16.358940 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:54:16.358966 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:54:16.358990 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:54:16.359009 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:54:16.359030 | orchestrator | 2026-03-31 02:54:16.359049 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 02:54:16.359069 | orchestrator | Tuesday 31 March 2026 02:54:12 +0000 (0:00:01.032) 0:03:13.415 ********* 2026-03-31 02:54:16.359088 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:16.359108 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:16.359127 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:16.359148 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 02:54:16.359167 | orchestrator | 2026-03-31 02:54:16.359188 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-31 02:54:16.359208 | orchestrator | Tuesday 31 March 2026 02:54:13 +0000 (0:00:01.181) 0:03:14.596 ********* 2026-03-31 02:54:16.359228 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:54:16.359240 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:54:16.359250 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:54:16.359261 | orchestrator | 2026-03-31 02:54:16.359271 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-31 02:54:16.359282 | orchestrator | Tuesday 31 March 2026 02:54:14 +0000 (0:00:00.357) 0:03:14.954 ********* 2026-03-31 02:54:16.359293 | orchestrator | changed: [testbed-node-0] 2026-03-31 02:54:16.359304 | orchestrator | changed: [testbed-node-1] 2026-03-31 02:54:16.359314 | orchestrator | changed: [testbed-node-2] 2026-03-31 02:54:16.359325 | orchestrator | 2026-03-31 02:54:16.359336 | orchestrator | 
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-31 02:54:16.359346 | orchestrator | Tuesday 31 March 2026 02:54:15 +0000 (0:00:01.591) 0:03:16.546 ********* 2026-03-31 02:54:16.359372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 02:54:32.628796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 02:54:32.628955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 02:54:32.628975 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:32.628989 | orchestrator | 2026-03-31 02:54:32.629003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-31 02:54:32.629018 | orchestrator | Tuesday 31 March 2026 02:54:16 +0000 (0:00:00.668) 0:03:17.214 ********* 2026-03-31 02:54:32.629032 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:54:32.629047 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:54:32.629061 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:54:32.629075 | orchestrator | 2026-03-31 02:54:32.629130 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 02:54:32.629147 | orchestrator | Tuesday 31 March 2026 02:54:16 +0000 (0:00:00.361) 0:03:17.575 ********* 2026-03-31 02:54:32.629162 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:32.629176 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:32.629190 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:32.629228 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:54:32.629237 | orchestrator | 2026-03-31 02:54:32.629260 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-31 02:54:32.629286 | orchestrator | Tuesday 31 March 2026 02:54:17 +0000 (0:00:01.192) 0:03:18.768 ********* 2026-03-31 
02:54:32.629301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:32.629315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:54:32.629328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:32.629341 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629355 | orchestrator | 2026-03-31 02:54:32.629370 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-31 02:54:32.629384 | orchestrator | Tuesday 31 March 2026 02:54:18 +0000 (0:00:00.448) 0:03:19.217 ********* 2026-03-31 02:54:32.629397 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629406 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:32.629415 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:32.629423 | orchestrator | 2026-03-31 02:54:32.629432 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-31 02:54:32.629441 | orchestrator | Tuesday 31 March 2026 02:54:18 +0000 (0:00:00.340) 0:03:19.558 ********* 2026-03-31 02:54:32.629450 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629459 | orchestrator | 2026-03-31 02:54:32.629468 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-31 02:54:32.629477 | orchestrator | Tuesday 31 March 2026 02:54:18 +0000 (0:00:00.226) 0:03:19.785 ********* 2026-03-31 02:54:32.629486 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629495 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:32.629504 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:32.629513 | orchestrator | 2026-03-31 02:54:32.629526 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-31 02:54:32.629540 | orchestrator | Tuesday 31 March 2026 02:54:19 +0000 (0:00:00.344) 0:03:20.130 ********* 
2026-03-31 02:54:32.629553 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629566 | orchestrator | 2026-03-31 02:54:32.629579 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-31 02:54:32.629591 | orchestrator | Tuesday 31 March 2026 02:54:19 +0000 (0:00:00.743) 0:03:20.873 ********* 2026-03-31 02:54:32.629603 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629616 | orchestrator | 2026-03-31 02:54:32.629630 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-31 02:54:32.629643 | orchestrator | Tuesday 31 March 2026 02:54:20 +0000 (0:00:00.256) 0:03:21.130 ********* 2026-03-31 02:54:32.629656 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629670 | orchestrator | 2026-03-31 02:54:32.629683 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-31 02:54:32.629698 | orchestrator | Tuesday 31 March 2026 02:54:20 +0000 (0:00:00.142) 0:03:21.272 ********* 2026-03-31 02:54:32.629726 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629736 | orchestrator | 2026-03-31 02:54:32.629743 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-31 02:54:32.629751 | orchestrator | Tuesday 31 March 2026 02:54:20 +0000 (0:00:00.249) 0:03:21.522 ********* 2026-03-31 02:54:32.629759 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629767 | orchestrator | 2026-03-31 02:54:32.629774 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-31 02:54:32.629782 | orchestrator | Tuesday 31 March 2026 02:54:20 +0000 (0:00:00.249) 0:03:21.772 ********* 2026-03-31 02:54:32.629790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:32.629797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 
02:54:32.629805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:32.629821 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629828 | orchestrator | 2026-03-31 02:54:32.629836 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-31 02:54:32.629844 | orchestrator | Tuesday 31 March 2026 02:54:21 +0000 (0:00:00.459) 0:03:22.231 ********* 2026-03-31 02:54:32.629852 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629882 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:54:32.629889 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:54:32.629897 | orchestrator | 2026-03-31 02:54:32.629905 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-31 02:54:32.629913 | orchestrator | Tuesday 31 March 2026 02:54:21 +0000 (0:00:00.337) 0:03:22.569 ********* 2026-03-31 02:54:32.629925 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629938 | orchestrator | 2026-03-31 02:54:32.629952 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-31 02:54:32.629965 | orchestrator | Tuesday 31 March 2026 02:54:21 +0000 (0:00:00.254) 0:03:22.823 ********* 2026-03-31 02:54:32.629978 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.629992 | orchestrator | 2026-03-31 02:54:32.630080 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 02:54:32.630093 | orchestrator | Tuesday 31 March 2026 02:54:22 +0000 (0:00:00.221) 0:03:23.045 ********* 2026-03-31 02:54:32.630101 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:54:32.630109 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:54:32.630117 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:54:32.630124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-31 02:54:32.630132 | orchestrator | 2026-03-31 02:54:32.630140 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-31 02:54:32.630148 | orchestrator | Tuesday 31 March 2026 02:54:23 +0000 (0:00:01.303) 0:03:24.349 ********* 2026-03-31 02:54:32.630182 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:54:32.630191 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:54:32.630198 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:54:32.630206 | orchestrator | 2026-03-31 02:54:32.630214 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-31 02:54:32.630222 | orchestrator | Tuesday 31 March 2026 02:54:23 +0000 (0:00:00.348) 0:03:24.698 ********* 2026-03-31 02:54:32.630230 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:54:32.630237 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:54:32.630245 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:54:32.630253 | orchestrator | 2026-03-31 02:54:32.630260 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-31 02:54:32.630268 | orchestrator | Tuesday 31 March 2026 02:54:25 +0000 (0:00:01.533) 0:03:26.231 ********* 2026-03-31 02:54:32.630276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:54:32.630283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 02:54:32.630291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 02:54:32.630299 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:54:32.630306 | orchestrator | 2026-03-31 02:54:32.630314 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-31 02:54:32.630322 | orchestrator | Tuesday 31 March 2026 02:54:25 +0000 (0:00:00.627) 0:03:26.859 ********* 2026-03-31 02:54:32.630330 | orchestrator | ok: 
[testbed-node-3]
2026-03-31 02:54:32.630337 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:54:32.630345 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:54:32.630352 | orchestrator |
2026-03-31 02:54:32.630360 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-31 02:54:32.630368 | orchestrator | Tuesday 31 March 2026 02:54:26 +0000 (0:00:00.341) 0:03:27.201 *********
2026-03-31 02:54:32.630376 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:32.630383 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:32.630391 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:32.630414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:54:32.630422 | orchestrator |
2026-03-31 02:54:32.630430 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-31 02:54:32.630438 | orchestrator | Tuesday 31 March 2026 02:54:27 +0000 (0:00:01.131) 0:03:28.332 *********
2026-03-31 02:54:32.630445 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:54:32.630453 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:54:32.630461 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:54:32.630468 | orchestrator |
2026-03-31 02:54:32.630476 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-31 02:54:32.630484 | orchestrator | Tuesday 31 March 2026 02:54:27 +0000 (0:00:00.369) 0:03:28.702 *********
2026-03-31 02:54:32.630492 | orchestrator | changed: [testbed-node-3]
2026-03-31 02:54:32.630499 | orchestrator | changed: [testbed-node-4]
2026-03-31 02:54:32.630507 | orchestrator | changed: [testbed-node-5]
2026-03-31 02:54:32.630515 | orchestrator |
2026-03-31 02:54:32.630522 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-31 02:54:32.630530 | orchestrator | Tuesday 31 March 2026 02:54:29 +0000 (0:00:01.288) 0:03:29.990 *********
2026-03-31 02:54:32.630538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 02:54:32.630546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 02:54:32.630559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 02:54:32.630567 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:54:32.630575 | orchestrator |
2026-03-31 02:54:32.630583 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-31 02:54:32.630591 | orchestrator | Tuesday 31 March 2026 02:54:30 +0000 (0:00:00.952) 0:03:30.943 *********
2026-03-31 02:54:32.630598 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:54:32.630606 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:54:32.630614 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:54:32.630622 | orchestrator |
2026-03-31 02:54:32.630630 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-31 02:54:32.630638 | orchestrator | Tuesday 31 March 2026 02:54:30 +0000 (0:00:00.576) 0:03:31.520 *********
2026-03-31 02:54:32.630645 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:54:32.630653 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:54:32.630661 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:54:32.630668 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:32.630676 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:32.630684 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:32.630691 | orchestrator |
2026-03-31 02:54:32.630699 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-31 02:54:32.630707 | orchestrator | Tuesday 31 March 2026 02:54:31 +0000 (0:00:00.627) 0:03:32.148 *********
2026-03-31 02:54:32.630714 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:54:32.630722 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:54:32.630730 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:54:32.630737 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:54:32.630745 | orchestrator |
2026-03-31 02:54:32.630759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-31 02:54:32.630773 | orchestrator | Tuesday 31 March 2026 02:54:32 +0000 (0:00:01.137) 0:03:33.286 *********
2026-03-31 02:54:32.630795 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.804844 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.804982 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.804995 | orchestrator |
2026-03-31 02:54:50.805004 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-31 02:54:50.805012 | orchestrator | Tuesday 31 March 2026 02:54:32 +0000 (0:00:00.367) 0:03:33.653 *********
2026-03-31 02:54:50.805019 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:54:50.805048 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:54:50.805052 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:54:50.805056 | orchestrator |
2026-03-31 02:54:50.805060 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-31 02:54:50.805064 | orchestrator | Tuesday 31 March 2026 02:54:34 +0000 (0:00:01.222) 0:03:34.875 *********
2026-03-31 02:54:50.805069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:54:50.805073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 02:54:50.805077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 02:54:50.805081 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805084 | orchestrator |
2026-03-31 02:54:50.805088 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-31 02:54:50.805092 | orchestrator | Tuesday 31 March 2026 02:54:35 +0000 (0:00:01.172) 0:03:36.048 *********
2026-03-31 02:54:50.805095 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805099 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805103 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805106 | orchestrator |
2026-03-31 02:54:50.805110 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-31 02:54:50.805114 | orchestrator |
2026-03-31 02:54:50.805117 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 02:54:50.805122 | orchestrator | Tuesday 31 March 2026 02:54:35 +0000 (0:00:00.648) 0:03:36.697 *********
2026-03-31 02:54:50.805130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:54:50.805137 | orchestrator |
2026-03-31 02:54:50.805144 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 02:54:50.805149 | orchestrator | Tuesday 31 March 2026 02:54:36 +0000 (0:00:00.925) 0:03:37.623 *********
2026-03-31 02:54:50.805155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:54:50.805161 | orchestrator |
2026-03-31 02:54:50.805166 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 02:54:50.805171 | orchestrator | Tuesday 31 March 2026 02:54:37 +0000 (0:00:00.620) 0:03:38.244 *********
2026-03-31 02:54:50.805177 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805183 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805188 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805193 | orchestrator |
2026-03-31 02:54:50.805199 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 02:54:50.805204 | orchestrator | Tuesday 31 March 2026 02:54:38 +0000 (0:00:00.724) 0:03:38.968 *********
2026-03-31 02:54:50.805209 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805215 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805220 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805226 | orchestrator |
2026-03-31 02:54:50.805232 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 02:54:50.805238 | orchestrator | Tuesday 31 March 2026 02:54:38 +0000 (0:00:00.611) 0:03:39.580 *********
2026-03-31 02:54:50.805244 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805250 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805255 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805261 | orchestrator |
2026-03-31 02:54:50.805267 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 02:54:50.805273 | orchestrator | Tuesday 31 March 2026 02:54:39 +0000 (0:00:00.364) 0:03:39.945 *********
2026-03-31 02:54:50.805278 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805284 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805303 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805309 | orchestrator |
2026-03-31 02:54:50.805315 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 02:54:50.805321 | orchestrator | Tuesday 31 March 2026 02:54:39 +0000 (0:00:00.392) 0:03:40.337 *********
2026-03-31 02:54:50.805335 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805342 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805348 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805355 | orchestrator |
2026-03-31 02:54:50.805361 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 02:54:50.805367 | orchestrator | Tuesday 31 March 2026 02:54:40 +0000 (0:00:00.764) 0:03:41.102 *********
2026-03-31 02:54:50.805371 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805375 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805378 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805382 | orchestrator |
2026-03-31 02:54:50.805386 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 02:54:50.805389 | orchestrator | Tuesday 31 March 2026 02:54:40 +0000 (0:00:00.597) 0:03:41.699 *********
2026-03-31 02:54:50.805393 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805397 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805400 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805404 | orchestrator |
2026-03-31 02:54:50.805408 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 02:54:50.805412 | orchestrator | Tuesday 31 March 2026 02:54:41 +0000 (0:00:00.373) 0:03:42.073 *********
2026-03-31 02:54:50.805415 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805419 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805423 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805426 | orchestrator |
2026-03-31 02:54:50.805430 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 02:54:50.805434 | orchestrator | Tuesday 31 March 2026 02:54:42 +0000 (0:00:00.802) 0:03:42.876 *********
2026-03-31 02:54:50.805438 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805441 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805445 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805449 | orchestrator |
2026-03-31 02:54:50.805465 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 02:54:50.805469 | orchestrator | Tuesday 31 March 2026 02:54:42 +0000 (0:00:00.778) 0:03:43.654 *********
2026-03-31 02:54:50.805473 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805477 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805481 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805484 | orchestrator |
2026-03-31 02:54:50.805488 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 02:54:50.805492 | orchestrator | Tuesday 31 March 2026 02:54:43 +0000 (0:00:00.585) 0:03:44.240 *********
2026-03-31 02:54:50.805495 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805499 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805503 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805506 | orchestrator |
2026-03-31 02:54:50.805510 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 02:54:50.805514 | orchestrator | Tuesday 31 March 2026 02:54:43 +0000 (0:00:00.361) 0:03:44.601 *********
2026-03-31 02:54:50.805518 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805521 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805525 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805529 | orchestrator |
2026-03-31 02:54:50.805532 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 02:54:50.805536 | orchestrator | Tuesday 31 March 2026 02:54:44 +0000 (0:00:00.348) 0:03:44.950 *********
2026-03-31 02:54:50.805540 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805544 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805547 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805551 | orchestrator |
2026-03-31 02:54:50.805555 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 02:54:50.805558 | orchestrator | Tuesday 31 March 2026 02:54:44 +0000 (0:00:00.346) 0:03:45.297 *********
2026-03-31 02:54:50.805562 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805570 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805574 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805578 | orchestrator |
2026-03-31 02:54:50.805582 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 02:54:50.805585 | orchestrator | Tuesday 31 March 2026 02:54:45 +0000 (0:00:00.619) 0:03:45.917 *********
2026-03-31 02:54:50.805589 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805593 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805596 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805600 | orchestrator |
2026-03-31 02:54:50.805604 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 02:54:50.805608 | orchestrator | Tuesday 31 March 2026 02:54:45 +0000 (0:00:00.352) 0:03:46.269 *********
2026-03-31 02:54:50.805611 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805615 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:54:50.805619 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:54:50.805622 | orchestrator |
2026-03-31 02:54:50.805626 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 02:54:50.805630 | orchestrator | Tuesday 31 March 2026 02:54:45 +0000 (0:00:00.332) 0:03:46.602 *********
2026-03-31 02:54:50.805634 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805637 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805641 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805645 | orchestrator |
2026-03-31 02:54:50.805649 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 02:54:50.805652 | orchestrator | Tuesday 31 March 2026 02:54:46 +0000 (0:00:00.379) 0:03:46.981 *********
2026-03-31 02:54:50.805656 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805660 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805663 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805667 | orchestrator |
2026-03-31 02:54:50.805671 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 02:54:50.805675 | orchestrator | Tuesday 31 March 2026 02:54:46 +0000 (0:00:00.664) 0:03:47.646 *********
2026-03-31 02:54:50.805678 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805682 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805686 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805690 | orchestrator |
2026-03-31 02:54:50.805701 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-31 02:54:50.805708 | orchestrator | Tuesday 31 March 2026 02:54:47 +0000 (0:00:00.645) 0:03:48.292 *********
2026-03-31 02:54:50.805717 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805724 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805729 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805734 | orchestrator |
2026-03-31 02:54:50.805740 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-31 02:54:50.805746 | orchestrator | Tuesday 31 March 2026 02:54:47 +0000 (0:00:00.343) 0:03:48.635 *********
2026-03-31 02:54:50.805752 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:54:50.805758 | orchestrator |
2026-03-31 02:54:50.805764 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-31 02:54:50.805770 | orchestrator | Tuesday 31 March 2026 02:54:48 +0000 (0:00:00.948) 0:03:49.584 *********
2026-03-31 02:54:50.805776 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:54:50.805782 | orchestrator |
2026-03-31 02:54:50.805788 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-31 02:54:50.805794 | orchestrator | Tuesday 31 March 2026 02:54:48 +0000 (0:00:00.153) 0:03:49.738 *********
2026-03-31 02:54:50.805800 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-31 02:54:50.805806 | orchestrator |
2026-03-31 02:54:50.805812 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-31 02:54:50.805819 | orchestrator | Tuesday 31 March 2026 02:54:49 +0000 (0:00:01.080) 0:03:50.819 *********
2026-03-31 02:54:50.805831 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:54:50.805837 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:54:50.805843 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:54:50.805849 | orchestrator |
2026-03-31 02:54:50.805856 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-31 02:54:50.805860 | orchestrator | Tuesday 31 March 2026 02:54:50 +0000 (0:00:00.391) 0:03:51.210 *********
2026-03-31 02:54:50.805901 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.880687 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:03.880788 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:03.880802 | orchestrator |
2026-03-31 02:56:03.880813 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-31 02:56:03.880825 | orchestrator | Tuesday 31 March 2026 02:54:50 +0000 (0:00:00.639) 0:03:51.849 *********
2026-03-31 02:56:03.880835 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.880847 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.880857 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.880866 | orchestrator |
2026-03-31 02:56:03.880876 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-31 02:56:03.880886 | orchestrator | Tuesday 31 March 2026 02:54:52 +0000 (0:00:01.380) 0:03:53.230 *********
2026-03-31 02:56:03.880896 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.880905 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.880915 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.880973 | orchestrator |
2026-03-31 02:56:03.880984 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-31 02:56:03.880994 | orchestrator | Tuesday 31 March 2026 02:54:53 +0000 (0:00:00.818) 0:03:54.048 *********
2026-03-31 02:56:03.881003 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881013 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.881023 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.881032 | orchestrator |
2026-03-31 02:56:03.881042 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-31 02:56:03.881052 | orchestrator | Tuesday 31 March 2026 02:54:53 +0000 (0:00:00.689) 0:03:54.737 *********
2026-03-31 02:56:03.881061 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.881071 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:03.881081 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:03.881090 | orchestrator |
2026-03-31 02:56:03.881100 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-31 02:56:03.881109 | orchestrator | Tuesday 31 March 2026 02:54:54 +0000 (0:00:01.048) 0:03:55.786 *********
2026-03-31 02:56:03.881119 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881129 | orchestrator |
2026-03-31 02:56:03.881138 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-31 02:56:03.881148 | orchestrator | Tuesday 31 March 2026 02:54:56 +0000 (0:00:01.474) 0:03:57.261 *********
2026-03-31 02:56:03.881157 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.881167 | orchestrator |
2026-03-31 02:56:03.881176 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-31 02:56:03.881186 | orchestrator | Tuesday 31 March 2026 02:54:57 +0000 (0:00:00.733) 0:03:57.994 *********
2026-03-31 02:56:03.881196 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 02:56:03.881206 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 02:56:03.881215 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 02:56:03.881225 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-31 02:56:03.881235 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-31 02:56:03.881245 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-31 02:56:03.881255 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-31 02:56:03.881264 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-31 02:56:03.881274 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-31 02:56:03.881307 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-31 02:56:03.881317 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-31 02:56:03.881327 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-31 02:56:03.881337 | orchestrator |
2026-03-31 02:56:03.881346 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-31 02:56:03.881356 | orchestrator | Tuesday 31 March 2026 02:55:00 +0000 (0:00:03.248) 0:04:01.243 *********
2026-03-31 02:56:03.881366 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881375 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.881398 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.881408 | orchestrator |
2026-03-31 02:56:03.881418 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-31 02:56:03.881427 | orchestrator | Tuesday 31 March 2026 02:55:01 +0000 (0:00:01.264) 0:04:02.508 *********
2026-03-31 02:56:03.881437 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.881447 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:03.881456 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:03.881466 | orchestrator |
2026-03-31 02:56:03.881475 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-31 02:56:03.881485 | orchestrator | Tuesday 31 March 2026 02:55:02 +0000 (0:00:00.651) 0:04:03.160 *********
2026-03-31 02:56:03.881494 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.881504 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:03.881513 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:03.881523 | orchestrator |
2026-03-31 02:56:03.881532 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-31 02:56:03.881542 | orchestrator | Tuesday 31 March 2026 02:55:02 +0000 (0:00:00.380) 0:04:03.540 *********
2026-03-31 02:56:03.881551 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881561 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.881570 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.881580 | orchestrator |
2026-03-31 02:56:03.881589 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-31 02:56:03.881599 | orchestrator | Tuesday 31 March 2026 02:55:04 +0000 (0:00:01.564) 0:04:05.105 *********
2026-03-31 02:56:03.881608 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881618 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.881627 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.881637 | orchestrator |
2026-03-31 02:56:03.881646 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-31 02:56:03.881655 | orchestrator | Tuesday 31 March 2026 02:55:05 +0000 (0:00:01.312) 0:04:06.418 *********
2026-03-31 02:56:03.881665 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:03.881674 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:03.881698 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:03.881708 | orchestrator |
2026-03-31 02:56:03.881718 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-31 02:56:03.881727 | orchestrator | Tuesday 31 March 2026 02:55:06 +0000 (0:00:00.647) 0:04:07.066 *********
2026-03-31 02:56:03.881737 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:56:03.881747 | orchestrator |
2026-03-31 02:56:03.881756 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-31 02:56:03.881766 | orchestrator | Tuesday 31 March 2026 02:55:06 +0000 (0:00:00.599) 0:04:07.665 *********
2026-03-31 02:56:03.881775 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:03.881785 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:03.881794 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:03.881804 | orchestrator |
2026-03-31 02:56:03.881813 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-31 02:56:03.881823 | orchestrator | Tuesday 31 March 2026 02:55:07 +0000 (0:00:00.316) 0:04:07.982 *********
2026-03-31 02:56:03.881832 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:03.881855 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:03.881864 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:03.881874 | orchestrator |
2026-03-31 02:56:03.881883 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-31 02:56:03.881893 | orchestrator | Tuesday 31 March 2026 02:55:07 +0000 (0:00:00.598) 0:04:08.580 *********
2026-03-31 02:56:03.881902 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:56:03.881912 | orchestrator |
2026-03-31 02:56:03.881942 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-31 02:56:03.881952 | orchestrator | Tuesday 31 March 2026 02:55:08 +0000 (0:00:00.585) 0:04:09.166 *********
2026-03-31 02:56:03.881961 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.881971 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.881980 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.881990 | orchestrator |
2026-03-31 02:56:03.881999 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-31 02:56:03.882009 | orchestrator | Tuesday 31 March 2026 02:55:10 +0000 (0:00:01.853) 0:04:11.019 *********
2026-03-31 02:56:03.882070 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.882080 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.882090 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.882100 | orchestrator |
2026-03-31 02:56:03.882109 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-31 02:56:03.882119 | orchestrator | Tuesday 31 March 2026 02:55:11 +0000 (0:00:01.548) 0:04:12.567 *********
2026-03-31 02:56:03.882129 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.882138 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.882148 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.882157 | orchestrator |
2026-03-31 02:56:03.882167 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-31 02:56:03.882177 | orchestrator | Tuesday 31 March 2026 02:55:13 +0000 (0:00:01.905) 0:04:14.473 *********
2026-03-31 02:56:03.882186 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:56:03.882196 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:56:03.882205 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:56:03.882215 | orchestrator |
2026-03-31 02:56:03.882224 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-31 02:56:03.882234 | orchestrator | Tuesday 31 March 2026 02:55:15 +0000 (0:00:01.980) 0:04:16.453 *********
2026-03-31 02:56:03.882243 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:56:03.882253 | orchestrator |
2026-03-31 02:56:03.882262 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-31 02:56:03.882272 | orchestrator | Tuesday 31 March 2026 02:55:16 +0000 (0:00:00.873) 0:04:17.327 *********
2026-03-31 02:56:03.882287 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-31 02:56:03.882297 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.882307 | orchestrator |
2026-03-31 02:56:03.882316 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-31 02:56:03.882326 | orchestrator | Tuesday 31 March 2026 02:55:38 +0000 (0:00:22.032) 0:04:39.359 *********
2026-03-31 02:56:03.882336 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:03.882345 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:03.882355 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:03.882365 | orchestrator |
2026-03-31 02:56:03.882374 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-31 02:56:03.882384 | orchestrator | Tuesday 31 March 2026 02:55:47 +0000 (0:00:09.194) 0:04:48.554 *********
2026-03-31 02:56:03.882394 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:03.882403 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:03.882413 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:03.882436 | orchestrator |
2026-03-31 02:56:03.882446 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-31 02:56:03.882456 | orchestrator | Tuesday 31 March 2026 02:55:48 +0000 (0:00:00.349) 0:04:48.904 *********
2026-03-31 02:56:03.882468 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-31 02:56:03.882489 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-31 02:56:18.261758 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-31 02:56:18.261897 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-31 02:56:18.261922 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-31 02:56:18.262164 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c5dfbaa5221eb5107cde056a0d3a74be7d6d57d'}])
2026-03-31 02:56:18.262191 | orchestrator |
2026-03-31 02:56:18.262203 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-31 02:56:18.262215 | orchestrator | Tuesday 31 March 2026 02:56:03 +0000 (0:00:15.832) 0:05:04.736 *********
2026-03-31 02:56:18.262225 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.262236 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:18.262246 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:18.262255 | orchestrator |
2026-03-31 02:56:18.262265 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-31 02:56:18.262275 | orchestrator | Tuesday 31 March 2026 02:56:04 +0000 (0:00:00.378) 0:05:05.114 *********
2026-03-31 02:56:18.262285 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:56:18.262295 | orchestrator |
2026-03-31 02:56:18.262304 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-31 02:56:18.262314 | orchestrator | Tuesday 31 March 2026 02:56:05 +0000 (0:00:00.895) 0:05:06.010 *********
2026-03-31 02:56:18.262324 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:18.262334 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:18.262344 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:18.262354 | orchestrator |
2026-03-31 02:56:18.262364 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-31 02:56:18.262399 | orchestrator | Tuesday 31 March 2026 02:56:05 +0000 (0:00:00.369) 0:05:06.379 *********
2026-03-31 02:56:18.262423 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.262432 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:18.262442 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:18.262454 | orchestrator |
2026-03-31 02:56:18.262471 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-31 02:56:18.262496 | orchestrator | Tuesday 31 March 2026 02:56:05 +0000 (0:00:00.388) 0:05:06.768 *********
2026-03-31 02:56:18.262513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:56:18.262529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 02:56:18.262544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 02:56:18.262560 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.262576 | orchestrator |
2026-03-31 02:56:18.262592 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-31 02:56:18.262607 | orchestrator | Tuesday 31 March 2026 02:56:07 +0000 (0:00:01.188) 0:05:07.957 *********
2026-03-31 02:56:18.262623 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:56:18.262638 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:56:18.262653 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:56:18.262670 | orchestrator |
2026-03-31 02:56:18.262687 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-31 02:56:18.262702 | orchestrator |
2026-03-31 02:56:18.262717 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 02:56:18.262732 | orchestrator | Tuesday 31 March 2026 02:56:08 +0000 (0:00:01.089) 0:05:09.046 *********
2026-03-31 02:56:18.262750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:56:18.262768 | orchestrator |
2026-03-31 02:56:18.262786 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 02:56:18.262801 | orchestrator | Tuesday 31 March 2026 02:56:08 +0000 (0:00:00.574) 0:05:09.620 *********
2026-03-31 02:56:18.262819 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-03-31 02:56:18.262834 | orchestrator | 2026-03-31 02:56:18.262851 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 02:56:18.262896 | orchestrator | Tuesday 31 March 2026 02:56:09 +0000 (0:00:01.014) 0:05:10.635 ********* 2026-03-31 02:56:18.262914 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:56:18.262931 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:56:18.263037 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:56:18.263054 | orchestrator | 2026-03-31 02:56:18.263072 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 02:56:18.263089 | orchestrator | Tuesday 31 March 2026 02:56:10 +0000 (0:00:00.783) 0:05:11.419 ********* 2026-03-31 02:56:18.263106 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263122 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263138 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.263155 | orchestrator | 2026-03-31 02:56:18.263170 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-31 02:56:18.263185 | orchestrator | Tuesday 31 March 2026 02:56:10 +0000 (0:00:00.348) 0:05:11.767 ********* 2026-03-31 02:56:18.263201 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263216 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263232 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.263248 | orchestrator | 2026-03-31 02:56:18.263263 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 02:56:18.263277 | orchestrator | Tuesday 31 March 2026 02:56:11 +0000 (0:00:00.760) 0:05:12.528 ********* 2026-03-31 02:56:18.263292 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263310 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263346 | orchestrator | skipping: 
[testbed-node-2] 2026-03-31 02:56:18.263364 | orchestrator | 2026-03-31 02:56:18.263380 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 02:56:18.263396 | orchestrator | Tuesday 31 March 2026 02:56:12 +0000 (0:00:00.407) 0:05:12.936 ********* 2026-03-31 02:56:18.263412 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:56:18.263429 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:56:18.263444 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:56:18.263460 | orchestrator | 2026-03-31 02:56:18.263476 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 02:56:18.263492 | orchestrator | Tuesday 31 March 2026 02:56:12 +0000 (0:00:00.879) 0:05:13.816 ********* 2026-03-31 02:56:18.263510 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263524 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263538 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.263553 | orchestrator | 2026-03-31 02:56:18.263569 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 02:56:18.263585 | orchestrator | Tuesday 31 March 2026 02:56:13 +0000 (0:00:00.384) 0:05:14.201 ********* 2026-03-31 02:56:18.263600 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263616 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263632 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.263648 | orchestrator | 2026-03-31 02:56:18.263664 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 02:56:18.263680 | orchestrator | Tuesday 31 March 2026 02:56:13 +0000 (0:00:00.632) 0:05:14.833 ********* 2026-03-31 02:56:18.263697 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:56:18.263713 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:56:18.263729 | orchestrator | ok: [testbed-node-2] 2026-03-31 
02:56:18.263745 | orchestrator | 2026-03-31 02:56:18.263761 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 02:56:18.263776 | orchestrator | Tuesday 31 March 2026 02:56:14 +0000 (0:00:00.766) 0:05:15.600 ********* 2026-03-31 02:56:18.263792 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:56:18.263808 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:56:18.263823 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:56:18.263839 | orchestrator | 2026-03-31 02:56:18.263857 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 02:56:18.263873 | orchestrator | Tuesday 31 March 2026 02:56:15 +0000 (0:00:00.733) 0:05:16.334 ********* 2026-03-31 02:56:18.263889 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.263905 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.263965 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.263985 | orchestrator | 2026-03-31 02:56:18.264001 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 02:56:18.264017 | orchestrator | Tuesday 31 March 2026 02:56:15 +0000 (0:00:00.377) 0:05:16.712 ********* 2026-03-31 02:56:18.264032 | orchestrator | ok: [testbed-node-0] 2026-03-31 02:56:18.264047 | orchestrator | ok: [testbed-node-1] 2026-03-31 02:56:18.264062 | orchestrator | ok: [testbed-node-2] 2026-03-31 02:56:18.264077 | orchestrator | 2026-03-31 02:56:18.264094 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 02:56:18.264110 | orchestrator | Tuesday 31 March 2026 02:56:16 +0000 (0:00:00.648) 0:05:17.361 ********* 2026-03-31 02:56:18.264125 | orchestrator | skipping: [testbed-node-0] 2026-03-31 02:56:18.264141 | orchestrator | skipping: [testbed-node-1] 2026-03-31 02:56:18.264157 | orchestrator | skipping: [testbed-node-2] 2026-03-31 02:56:18.264174 | orchestrator | 
2026-03-31 02:56:18.264190 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 02:56:18.264206 | orchestrator | Tuesday 31 March 2026 02:56:16 +0000 (0:00:00.390) 0:05:17.752 *********
2026-03-31 02:56:18.264223 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.264238 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:18.264255 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:18.264271 | orchestrator |
2026-03-31 02:56:18.264304 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 02:56:18.264321 | orchestrator | Tuesday 31 March 2026 02:56:17 +0000 (0:00:00.349) 0:05:18.102 *********
2026-03-31 02:56:18.264337 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.264353 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:18.264365 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:18.264375 | orchestrator |
2026-03-31 02:56:18.264384 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 02:56:18.264394 | orchestrator | Tuesday 31 March 2026 02:56:17 +0000 (0:00:00.376) 0:05:18.478 *********
2026-03-31 02:56:18.264403 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:56:18.264413 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:56:18.264422 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:56:18.264432 | orchestrator |
2026-03-31 02:56:18.264441 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 02:56:18.264469 | orchestrator | Tuesday 31 March 2026 02:56:18 +0000 (0:00:00.631) 0:05:19.110 *********
2026-03-31 02:57:24.713680 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.713783 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.713795 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.713803 | orchestrator |
2026-03-31 02:57:24.713813 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 02:57:24.713822 | orchestrator | Tuesday 31 March 2026 02:56:18 +0000 (0:00:00.417) 0:05:19.527 *********
2026-03-31 02:57:24.713831 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:24.713840 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:24.713848 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:24.713856 | orchestrator |
2026-03-31 02:57:24.713864 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 02:57:24.713872 | orchestrator | Tuesday 31 March 2026 02:56:19 +0000 (0:00:00.379) 0:05:19.906 *********
2026-03-31 02:57:24.713880 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:24.713888 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:24.713896 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:24.713904 | orchestrator |
2026-03-31 02:57:24.713912 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 02:57:24.713920 | orchestrator | Tuesday 31 March 2026 02:56:19 +0000 (0:00:00.375) 0:05:20.282 *********
2026-03-31 02:57:24.713927 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:24.713935 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:24.713943 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:24.713950 | orchestrator |
2026-03-31 02:57:24.713958 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-31 02:57:24.713966 | orchestrator | Tuesday 31 March 2026 02:56:20 +0000 (0:00:00.931) 0:05:21.214 *********
2026-03-31 02:57:24.713974 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:57:24.713982 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 02:57:24.714054 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 02:57:24.714064 | orchestrator |
2026-03-31 02:57:24.714071 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-31 02:57:24.714079 | orchestrator | Tuesday 31 March 2026 02:56:21 +0000 (0:00:00.697) 0:05:21.911 *********
2026-03-31 02:57:24.714087 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:57:24.714096 | orchestrator |
2026-03-31 02:57:24.714104 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-31 02:57:24.714112 | orchestrator | Tuesday 31 March 2026 02:56:22 +0000 (0:00:01.031) 0:05:22.943 *********
2026-03-31 02:57:24.714120 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:24.714128 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:24.714135 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:24.714143 | orchestrator |
2026-03-31 02:57:24.714151 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-31 02:57:24.714185 | orchestrator | Tuesday 31 March 2026 02:56:22 +0000 (0:00:00.812) 0:05:23.756 *********
2026-03-31 02:57:24.714193 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.714201 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.714209 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.714217 | orchestrator |
2026-03-31 02:57:24.714224 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-31 02:57:24.714233 | orchestrator | Tuesday 31 March 2026 02:56:23 +0000 (0:00:00.348) 0:05:24.104 *********
2026-03-31 02:57:24.714243 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714252 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714261 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714270 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-31 02:57:24.714279 | orchestrator |
2026-03-31 02:57:24.714300 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-31 02:57:24.714318 | orchestrator | Tuesday 31 March 2026 02:56:35 +0000 (0:00:12.355) 0:05:36.460 *********
2026-03-31 02:57:24.714327 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:24.714336 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:24.714345 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:24.714354 | orchestrator |
2026-03-31 02:57:24.714363 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-31 02:57:24.714373 | orchestrator | Tuesday 31 March 2026 02:56:35 +0000 (0:00:00.409) 0:05:36.869 *********
2026-03-31 02:57:24.714382 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714391 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-31 02:57:24.714400 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-31 02:57:24.714409 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 02:57:24.714418 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714427 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 02:57:24.714435 | orchestrator |
2026-03-31 02:57:24.714444 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-31 02:57:24.714453 | orchestrator | Tuesday 31 March 2026 02:56:38 +0000 (0:00:02.763) 0:05:39.632 *********
2026-03-31 02:57:24.714462 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714471 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-31 02:57:24.714480 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-31 02:57:24.714489 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 02:57:24.714500 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-31 02:57:24.714514 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-31 02:57:24.714527 | orchestrator |
2026-03-31 02:57:24.714539 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-31 02:57:24.714551 | orchestrator | Tuesday 31 March 2026 02:56:40 +0000 (0:00:01.289) 0:05:40.922 *********
2026-03-31 02:57:24.714571 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:24.714587 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:24.714600 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:24.714613 | orchestrator |
2026-03-31 02:57:24.714645 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-31 02:57:24.714668 | orchestrator | Tuesday 31 March 2026 02:56:40 +0000 (0:00:00.726) 0:05:41.649 *********
2026-03-31 02:57:24.714682 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.714694 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.714705 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.714718 | orchestrator |
2026-03-31 02:57:24.714732 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-31 02:57:24.714744 | orchestrator | Tuesday 31 March 2026 02:56:41 +0000 (0:00:00.365) 0:05:42.015 *********
2026-03-31 02:57:24.714770 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.714783 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.714796 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.714809 | orchestrator |
2026-03-31 02:57:24.714820 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-31 02:57:24.714829 | orchestrator | Tuesday 31 March 2026 02:56:41 +0000 (0:00:00.743) 0:05:42.758 *********
2026-03-31 02:57:24.714837 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:57:24.714844 | orchestrator |
2026-03-31 02:57:24.714852 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-31 02:57:24.714860 | orchestrator | Tuesday 31 March 2026 02:56:42 +0000 (0:00:00.634) 0:05:43.393 *********
2026-03-31 02:57:24.714868 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.714876 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.714883 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.714891 | orchestrator |
2026-03-31 02:57:24.714899 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-31 02:57:24.714907 | orchestrator | Tuesday 31 March 2026 02:56:42 +0000 (0:00:00.391) 0:05:43.784 *********
2026-03-31 02:57:24.714914 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.714922 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.714930 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:57:24.714937 | orchestrator |
2026-03-31 02:57:24.714945 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-31 02:57:24.714953 | orchestrator | Tuesday 31 March 2026 02:56:43 +0000 (0:00:00.766) 0:05:44.551 *********
2026-03-31 02:57:24.714960 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:57:24.714969 | orchestrator |
2026-03-31 02:57:24.714977 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-31 02:57:24.715060 | orchestrator | Tuesday 31 March 2026 02:56:44 +0000 (0:00:00.663) 0:05:45.214 *********
2026-03-31 02:57:24.715071 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:24.715080 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:24.715087 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:24.715095 | orchestrator |
2026-03-31 02:57:24.715103 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-31 02:57:24.715111 | orchestrator | Tuesday 31 March 2026 02:56:45 +0000 (0:00:01.367) 0:05:46.582 *********
2026-03-31 02:57:24.715119 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:24.715127 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:24.715134 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:24.715142 | orchestrator |
2026-03-31 02:57:24.715150 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-31 02:57:24.715158 | orchestrator | Tuesday 31 March 2026 02:56:47 +0000 (0:00:01.624) 0:05:48.206 *********
2026-03-31 02:57:24.715165 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:24.715173 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:24.715181 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:24.715189 | orchestrator |
2026-03-31 02:57:24.715197 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-31 02:57:24.715212 | orchestrator | Tuesday 31 March 2026 02:56:49 +0000 (0:00:01.892) 0:05:50.099 *********
2026-03-31 02:57:24.715220 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:24.715228 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:24.715236 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:24.715244 | orchestrator |
2026-03-31 02:57:24.715251 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-31 02:57:24.715259 | orchestrator | Tuesday 31 March 2026 02:56:51 +0000 (0:00:02.069) 0:05:52.168 *********
2026-03-31 02:57:24.715267 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:24.715275 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:57:24.715282 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-31 02:57:24.715297 | orchestrator |
2026-03-31 02:57:24.715305 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-31 02:57:24.715312 | orchestrator | Tuesday 31 March 2026 02:56:52 +0000 (0:00:00.720) 0:05:52.889 *********
2026-03-31 02:57:24.715320 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-31 02:57:24.715328 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-31 02:57:24.715336 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-31 02:57:24.715344 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-31 02:57:24.715352 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-31 02:57:24.715359 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-31 02:57:24.715367 | orchestrator |
2026-03-31 02:57:24.715375 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-31 02:57:24.715383 | orchestrator | Tuesday 31 March 2026 02:57:23 +0000 (0:00:31.399) 0:06:24.289 *********
2026-03-31 02:57:24.715391 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-31 02:57:24.715399 | orchestrator |
2026-03-31 02:57:24.715414 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-31 02:57:52.393620 | orchestrator | Tuesday 31 March 2026 02:57:24 +0000 (0:00:01.282) 0:06:25.571 *********
2026-03-31 02:57:52.393739 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:52.393756 | orchestrator |
2026-03-31 02:57:52.393769 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-31 02:57:52.393780 | orchestrator | Tuesday 31 March 2026 02:57:25 +0000 (0:00:00.351) 0:06:25.923 *********
2026-03-31 02:57:52.393791 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:52.393802 | orchestrator |
2026-03-31 02:57:52.393814 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-31 02:57:52.393825 | orchestrator | Tuesday 31 March 2026 02:57:25 +0000 (0:00:00.176) 0:06:26.099 *********
2026-03-31 02:57:52.393836 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-31 02:57:52.393847 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-31 02:57:52.393858 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-31 02:57:52.393874 | orchestrator |
2026-03-31 02:57:52.393892 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-31 02:57:52.393911 | orchestrator | Tuesday 31 March 2026 02:57:31 +0000 (0:00:06.421) 0:06:32.520 *********
2026-03-31 02:57:52.393930 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-31 02:57:52.393947 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-31 02:57:52.393964 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-31 02:57:52.393982 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-31 02:57:52.394002 | orchestrator |
2026-03-31 02:57:52.394150 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-31 02:57:52.394163 | orchestrator | Tuesday 31 March 2026 02:57:36 +0000 (0:00:05.281) 0:06:37.801 *********
2026-03-31 02:57:52.394177 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:52.394192 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:52.394205 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:52.394218 | orchestrator |
2026-03-31 02:57:52.394230 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-31 02:57:52.394243 | orchestrator | Tuesday 31 March 2026 02:57:37 +0000 (0:00:00.849) 0:06:38.651 *********
2026-03-31 02:57:52.394256 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:57:52.394295 | orchestrator |
2026-03-31 02:57:52.394308 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-31 02:57:52.394321 | orchestrator | Tuesday 31 March 2026 02:57:38 +0000 (0:00:00.589) 0:06:39.241 *********
2026-03-31 02:57:52.394334 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:52.394346 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:52.394359 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:52.394372 | orchestrator |
2026-03-31 02:57:52.394384 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-31 02:57:52.394396 | orchestrator | Tuesday 31 March 2026 02:57:39 +0000 (0:00:00.639) 0:06:39.880 *********
2026-03-31 02:57:52.394408 | orchestrator | changed: [testbed-node-1]
2026-03-31 02:57:52.394421 | orchestrator | changed: [testbed-node-0]
2026-03-31 02:57:52.394432 | orchestrator | changed: [testbed-node-2]
2026-03-31 02:57:52.394444 | orchestrator |
2026-03-31 02:57:52.394457 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-31 02:57:52.394470 | orchestrator | Tuesday 31 March 2026 02:57:40 +0000 (0:00:01.269) 0:06:41.149 *********
2026-03-31 02:57:52.394497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 02:57:52.394509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 02:57:52.394520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 02:57:52.394530 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:57:52.394541 | orchestrator |
2026-03-31 02:57:52.394552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-31 02:57:52.394562 | orchestrator | Tuesday 31 March 2026 02:57:40 +0000 (0:00:00.674) 0:06:41.823 *********
2026-03-31 02:57:52.394573 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:57:52.394583 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:57:52.394594 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:57:52.394605 | orchestrator |
2026-03-31 02:57:52.394615 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-31 02:57:52.394627 | orchestrator |
2026-03-31 02:57:52.394637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 02:57:52.394648 | orchestrator | Tuesday 31 March 2026 02:57:41 +0000 (0:00:00.899) 0:06:42.723 *********
2026-03-31 02:57:52.394659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:57:52.394671 | orchestrator |
2026-03-31 02:57:52.394682 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 02:57:52.394692 | orchestrator | Tuesday 31 March 2026 02:57:42 +0000 (0:00:00.540) 0:06:43.263 *********
2026-03-31 02:57:52.394703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 02:57:52.394714 | orchestrator |
2026-03-31 02:57:52.394724 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 02:57:52.394735 | orchestrator | Tuesday 31 March 2026 02:57:43 +0000 (0:00:00.801) 0:06:44.065 *********
2026-03-31 02:57:52.394745 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:57:52.394756 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:57:52.394767 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:57:52.394777 | orchestrator |
2026-03-31 02:57:52.394788 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 02:57:52.394799 | orchestrator | Tuesday 31 March 2026 02:57:43 +0000 (0:00:00.349) 0:06:44.414 *********
2026-03-31 02:57:52.394809 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:57:52.394840 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:57:52.394853 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:57:52.394864 | orchestrator |
2026-03-31 02:57:52.394874 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 02:57:52.394885 | orchestrator | Tuesday 31 March 2026 02:57:44 +0000 (0:00:00.664) 0:06:45.079 *********
2026-03-31 02:57:52.394905 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.394915 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.394926 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.394937 | orchestrator | 2026-03-31 02:57:52.394947 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 02:57:52.394958 | orchestrator | Tuesday 31 March 2026 02:57:44 +0000 (0:00:00.688) 0:06:45.768 ********* 2026-03-31 02:57:52.394969 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.394979 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.394990 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395000 | orchestrator | 2026-03-31 02:57:52.395030 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 02:57:52.395042 | orchestrator | Tuesday 31 March 2026 02:57:45 +0000 (0:00:00.998) 0:06:46.766 ********* 2026-03-31 02:57:52.395052 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395063 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395074 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395084 | orchestrator | 2026-03-31 02:57:52.395095 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 02:57:52.395105 | orchestrator | Tuesday 31 March 2026 02:57:46 +0000 (0:00:00.362) 0:06:47.129 ********* 2026-03-31 02:57:52.395116 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395127 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395137 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395148 | orchestrator | 2026-03-31 02:57:52.395158 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 02:57:52.395169 | orchestrator | Tuesday 31 March 2026 02:57:46 +0000 (0:00:00.343) 0:06:47.473 ********* 2026-03-31 02:57:52.395180 | 
orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395190 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395201 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395211 | orchestrator | 2026-03-31 02:57:52.395222 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 02:57:52.395233 | orchestrator | Tuesday 31 March 2026 02:57:46 +0000 (0:00:00.363) 0:06:47.836 ********* 2026-03-31 02:57:52.395244 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.395254 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.395265 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395276 | orchestrator | 2026-03-31 02:57:52.395286 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 02:57:52.395297 | orchestrator | Tuesday 31 March 2026 02:57:47 +0000 (0:00:01.017) 0:06:48.853 ********* 2026-03-31 02:57:52.395308 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.395318 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.395329 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395339 | orchestrator | 2026-03-31 02:57:52.395350 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 02:57:52.395361 | orchestrator | Tuesday 31 March 2026 02:57:48 +0000 (0:00:00.787) 0:06:49.641 ********* 2026-03-31 02:57:52.395372 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395382 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395393 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395404 | orchestrator | 2026-03-31 02:57:52.395414 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 02:57:52.395425 | orchestrator | Tuesday 31 March 2026 02:57:49 +0000 (0:00:00.387) 0:06:50.028 ********* 2026-03-31 02:57:52.395436 | orchestrator | skipping: 
[testbed-node-3] 2026-03-31 02:57:52.395446 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395457 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395467 | orchestrator | 2026-03-31 02:57:52.395484 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 02:57:52.395495 | orchestrator | Tuesday 31 March 2026 02:57:49 +0000 (0:00:00.330) 0:06:50.358 ********* 2026-03-31 02:57:52.395506 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.395516 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.395533 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395544 | orchestrator | 2026-03-31 02:57:52.395555 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 02:57:52.395566 | orchestrator | Tuesday 31 March 2026 02:57:50 +0000 (0:00:00.704) 0:06:51.063 ********* 2026-03-31 02:57:52.395578 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.395595 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.395613 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395632 | orchestrator | 2026-03-31 02:57:52.395651 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 02:57:52.395670 | orchestrator | Tuesday 31 March 2026 02:57:50 +0000 (0:00:00.373) 0:06:51.437 ********* 2026-03-31 02:57:52.395688 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:57:52.395706 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:57:52.395724 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:57:52.395741 | orchestrator | 2026-03-31 02:57:52.395759 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 02:57:52.395777 | orchestrator | Tuesday 31 March 2026 02:57:50 +0000 (0:00:00.356) 0:06:51.793 ********* 2026-03-31 02:57:52.395794 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395812 | 
orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395832 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395851 | orchestrator | 2026-03-31 02:57:52.395869 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 02:57:52.395888 | orchestrator | Tuesday 31 March 2026 02:57:51 +0000 (0:00:00.398) 0:06:52.191 ********* 2026-03-31 02:57:52.395907 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.395926 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.395944 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.395962 | orchestrator | 2026-03-31 02:57:52.395981 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 02:57:52.396000 | orchestrator | Tuesday 31 March 2026 02:57:51 +0000 (0:00:00.667) 0:06:52.858 ********* 2026-03-31 02:57:52.396089 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:57:52.396110 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:57:52.396128 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:57:52.396146 | orchestrator | 2026-03-31 02:57:52.396178 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 02:58:53.223459 | orchestrator | Tuesday 31 March 2026 02:57:52 +0000 (0:00:00.392) 0:06:53.251 ********* 2026-03-31 02:58:53.223557 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:58:53.223570 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:58:53.223578 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:58:53.223586 | orchestrator | 2026-03-31 02:58:53.223595 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 02:58:53.223604 | orchestrator | Tuesday 31 March 2026 02:57:52 +0000 (0:00:00.365) 0:06:53.616 ********* 2026-03-31 02:58:53.223612 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:58:53.223620 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 02:58:53.223628 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:58:53.223636 | orchestrator | 2026-03-31 02:58:53.223644 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-31 02:58:53.223652 | orchestrator | Tuesday 31 March 2026 02:57:53 +0000 (0:00:00.871) 0:06:54.487 ********* 2026-03-31 02:58:53.223660 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:58:53.223667 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:58:53.223675 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:58:53.223683 | orchestrator | 2026-03-31 02:58:53.223691 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-31 02:58:53.223699 | orchestrator | Tuesday 31 March 2026 02:57:53 +0000 (0:00:00.350) 0:06:54.838 ********* 2026-03-31 02:58:53.223707 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 02:58:53.223715 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 02:58:53.223757 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 02:58:53.223772 | orchestrator | 2026-03-31 02:58:53.223787 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-31 02:58:53.223803 | orchestrator | Tuesday 31 March 2026 02:57:54 +0000 (0:00:00.708) 0:06:55.546 ********* 2026-03-31 02:58:53.223818 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:58:53.223832 | orchestrator | 2026-03-31 02:58:53.223848 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-31 02:58:53.223862 | orchestrator | Tuesday 31 March 2026 02:57:55 +0000 (0:00:00.886) 0:06:56.433 ********* 2026-03-31 02:58:53.223877 | orchestrator | skipping: 
[testbed-node-3] 2026-03-31 02:58:53.223887 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:58:53.223895 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:58:53.223905 | orchestrator | 2026-03-31 02:58:53.223918 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-31 02:58:53.223932 | orchestrator | Tuesday 31 March 2026 02:57:55 +0000 (0:00:00.352) 0:06:56.785 ********* 2026-03-31 02:58:53.223945 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:58:53.223959 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:58:53.223967 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:58:53.223975 | orchestrator | 2026-03-31 02:58:53.223982 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-31 02:58:53.223991 | orchestrator | Tuesday 31 March 2026 02:57:56 +0000 (0:00:00.359) 0:06:57.145 ********* 2026-03-31 02:58:53.224000 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:58:53.224009 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:58:53.224018 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:58:53.224027 | orchestrator | 2026-03-31 02:58:53.224035 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-31 02:58:53.224044 | orchestrator | Tuesday 31 March 2026 02:57:57 +0000 (0:00:00.828) 0:06:57.974 ********* 2026-03-31 02:58:53.224080 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:58:53.224104 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:58:53.224115 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:58:53.224129 | orchestrator | 2026-03-31 02:58:53.224142 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-31 02:58:53.224156 | orchestrator | Tuesday 31 March 2026 02:57:57 +0000 (0:00:00.632) 0:06:58.606 ********* 2026-03-31 02:58:53.224170 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 02:58:53.224186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 02:58:53.224201 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 02:58:53.224216 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 02:58:53.224231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 02:58:53.224245 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 02:58:53.224259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 02:58:53.224269 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 02:58:53.224278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 02:58:53.224287 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 02:58:53.224296 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 02:58:53.224305 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 02:58:53.224313 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 02:58:53.224330 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 02:58:53.224339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 02:58:53.224348 | orchestrator | 2026-03-31 02:58:53.224372 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-31 02:58:53.224381 | orchestrator | Tuesday 31 March 2026 02:58:00 +0000 (0:00:03.261) 0:07:01.868 ********* 2026-03-31 02:58:53.224390 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:58:53.224400 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:58:53.224409 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:58:53.224418 | orchestrator | 2026-03-31 02:58:53.224427 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-31 02:58:53.224435 | orchestrator | Tuesday 31 March 2026 02:58:01 +0000 (0:00:00.338) 0:07:02.206 ********* 2026-03-31 02:58:53.224443 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:58:53.224456 | orchestrator | 2026-03-31 02:58:53.224469 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-31 02:58:53.224482 | orchestrator | Tuesday 31 March 2026 02:58:02 +0000 (0:00:00.851) 0:07:03.057 ********* 2026-03-31 02:58:53.224496 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 02:58:53.224509 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 02:58:53.224523 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 02:58:53.224536 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-31 02:58:53.224550 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-31 02:58:53.224558 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-31 02:58:53.224566 | orchestrator | 2026-03-31 02:58:53.224574 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-31 02:58:53.224582 | orchestrator | Tuesday 31 March 2026 02:58:03 +0000 (0:00:01.052) 0:07:04.110 ********* 2026-03-31 02:58:53.224589 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 02:58:53.224602 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 02:58:53.224615 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 02:58:53.224629 | orchestrator | 2026-03-31 02:58:53.224643 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-31 02:58:53.224656 | orchestrator | Tuesday 31 March 2026 02:58:05 +0000 (0:00:02.293) 0:07:06.403 ********* 2026-03-31 02:58:53.224669 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 02:58:53.224677 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 02:58:53.224684 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:58:53.224692 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 02:58:53.224700 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 02:58:53.224708 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:58:53.224716 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 02:58:53.224723 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 02:58:53.224731 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:58:53.224739 | orchestrator | 2026-03-31 02:58:53.224746 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-31 02:58:53.224754 | orchestrator | Tuesday 31 March 2026 02:58:06 +0000 (0:00:01.219) 0:07:07.623 ********* 2026-03-31 02:58:53.224762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 02:58:53.224770 | orchestrator | 2026-03-31 02:58:53.224778 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-31 02:58:53.224791 | orchestrator | Tuesday 31 March 2026 02:58:08 +0000 (0:00:02.213) 0:07:09.837 ********* 2026-03-31 02:58:53.224799 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:58:53.224813 | orchestrator | 2026-03-31 02:58:53.224821 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-31 02:58:53.224829 | orchestrator | Tuesday 31 March 2026 02:58:09 +0000 (0:00:00.855) 0:07:10.692 ********* 2026-03-31 02:58:53.224838 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}) 2026-03-31 02:58:53.224847 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}) 2026-03-31 02:58:53.224855 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}) 2026-03-31 02:58:53.224863 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}) 2026-03-31 02:58:53.224871 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'}) 2026-03-31 02:58:53.224879 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}) 2026-03-31 02:58:53.224886 | orchestrator | 2026-03-31 02:58:53.224894 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-31 02:58:53.224902 | orchestrator | Tuesday 31 March 2026 02:58:52 +0000 (0:00:43.024) 0:07:53.717 ********* 2026-03-31 02:58:53.224910 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:58:53.224918 | orchestrator | skipping: [testbed-node-4] 2026-03-31 
02:58:53.224925 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:58:53.224933 | orchestrator | 2026-03-31 02:58:53.224941 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-31 02:58:53.224954 | orchestrator | Tuesday 31 March 2026 02:58:53 +0000 (0:00:00.365) 0:07:54.083 ********* 2026-03-31 02:59:32.577291 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:59:32.577403 | orchestrator | 2026-03-31 02:59:32.577420 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-31 02:59:32.577433 | orchestrator | Tuesday 31 March 2026 02:58:54 +0000 (0:00:00.879) 0:07:54.962 ********* 2026-03-31 02:59:32.577444 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:59:32.577456 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:59:32.577467 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:59:32.577478 | orchestrator | 2026-03-31 02:59:32.577489 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-31 02:59:32.577501 | orchestrator | Tuesday 31 March 2026 02:58:54 +0000 (0:00:00.688) 0:07:55.651 ********* 2026-03-31 02:59:32.577512 | orchestrator | ok: [testbed-node-3] 2026-03-31 02:59:32.577523 | orchestrator | ok: [testbed-node-4] 2026-03-31 02:59:32.577533 | orchestrator | ok: [testbed-node-5] 2026-03-31 02:59:32.577544 | orchestrator | 2026-03-31 02:59:32.577555 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-31 02:59:32.577565 | orchestrator | Tuesday 31 March 2026 02:58:57 +0000 (0:00:02.646) 0:07:58.298 ********* 2026-03-31 02:59:32.577576 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:59:32.577588 | orchestrator | 2026-03-31 02:59:32.577599 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-31 02:59:32.577609 | orchestrator | Tuesday 31 March 2026 02:58:58 +0000 (0:00:00.903) 0:07:59.202 ********* 2026-03-31 02:59:32.577620 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:59:32.577632 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:59:32.577643 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:59:32.577654 | orchestrator | 2026-03-31 02:59:32.577689 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-31 02:59:32.577701 | orchestrator | Tuesday 31 March 2026 02:58:59 +0000 (0:00:01.244) 0:08:00.446 ********* 2026-03-31 02:59:32.577712 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:59:32.577723 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:59:32.577734 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:59:32.577744 | orchestrator | 2026-03-31 02:59:32.577755 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-31 02:59:32.577766 | orchestrator | Tuesday 31 March 2026 02:59:00 +0000 (0:00:01.232) 0:08:01.679 ********* 2026-03-31 02:59:32.577776 | orchestrator | changed: [testbed-node-3] 2026-03-31 02:59:32.577787 | orchestrator | changed: [testbed-node-4] 2026-03-31 02:59:32.577798 | orchestrator | changed: [testbed-node-5] 2026-03-31 02:59:32.577808 | orchestrator | 2026-03-31 02:59:32.577822 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-31 02:59:32.577835 | orchestrator | Tuesday 31 March 2026 02:59:02 +0000 (0:00:02.050) 0:08:03.729 ********* 2026-03-31 02:59:32.577848 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.577860 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.577873 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:59:32.577886 | orchestrator | 2026-03-31 02:59:32.577899 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-31 02:59:32.577911 | orchestrator | Tuesday 31 March 2026 02:59:03 +0000 (0:00:00.367) 0:08:04.096 ********* 2026-03-31 02:59:32.577924 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.577936 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.577948 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:59:32.577960 | orchestrator | 2026-03-31 02:59:32.577972 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-31 02:59:32.578000 | orchestrator | Tuesday 31 March 2026 02:59:03 +0000 (0:00:00.370) 0:08:04.466 ********* 2026-03-31 02:59:32.578014 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-31 02:59:32.578136 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-31 02:59:32.578151 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-31 02:59:32.578162 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-31 02:59:32.578173 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-31 02:59:32.578183 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 02:59:32.578194 | orchestrator | 2026-03-31 02:59:32.578205 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-31 02:59:32.578215 | orchestrator | Tuesday 31 March 2026 02:59:04 +0000 (0:00:01.023) 0:08:05.490 ********* 2026-03-31 02:59:32.578226 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-31 02:59:32.578237 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-31 02:59:32.578248 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-31 02:59:32.578259 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-31 02:59:32.578269 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-31 02:59:32.578280 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-31 02:59:32.578291 | orchestrator | 2026-03-31 02:59:32.578302 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-31 02:59:32.578312 | orchestrator | Tuesday 31 March 2026 02:59:07 +0000 (0:00:02.554) 0:08:08.044 ********* 2026-03-31 02:59:32.578323 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-31 02:59:32.578334 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-31 02:59:32.578344 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-31 02:59:32.578355 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-31 02:59:32.578365 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-31 02:59:32.578376 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-31 02:59:32.578386 | orchestrator | 2026-03-31 02:59:32.578397 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-31 02:59:32.578408 | orchestrator | Tuesday 31 March 2026 02:59:10 +0000 (0:00:03.707) 0:08:11.752 ********* 2026-03-31 02:59:32.578429 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.578440 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.578450 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 02:59:32.578461 | orchestrator | 2026-03-31 02:59:32.578472 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-31 02:59:32.578502 | orchestrator | Tuesday 31 March 2026 02:59:13 +0000 (0:00:02.536) 0:08:14.288 ********* 2026-03-31 02:59:32.578514 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.578524 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.578535 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-31 02:59:32.578546 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 02:59:32.578557 | orchestrator | 2026-03-31 02:59:32.578567 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-31 02:59:32.578578 | orchestrator | Tuesday 31 March 2026 02:59:26 +0000 (0:00:12.772) 0:08:27.061 ********* 2026-03-31 02:59:32.578588 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.578599 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.578609 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:59:32.578620 | orchestrator | 2026-03-31 02:59:32.578631 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 02:59:32.578641 | orchestrator | Tuesday 31 March 2026 02:59:27 +0000 (0:00:01.306) 0:08:28.367 ********* 2026-03-31 02:59:32.578652 | orchestrator | skipping: [testbed-node-3] 2026-03-31 02:59:32.578663 | orchestrator | skipping: [testbed-node-4] 2026-03-31 02:59:32.578673 | orchestrator | skipping: [testbed-node-5] 2026-03-31 02:59:32.578684 | orchestrator | 2026-03-31 02:59:32.578694 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 02:59:32.578705 | orchestrator | Tuesday 31 March 2026 02:59:27 +0000 (0:00:00.350) 0:08:28.717 ********* 2026-03-31 02:59:32.578715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 02:59:32.578726 | orchestrator | 2026-03-31 02:59:32.578737 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-31 02:59:32.578747 | orchestrator | Tuesday 31 March 2026 02:59:28 +0000 (0:00:00.881) 0:08:29.599 ********* 2026-03-31 02:59:32.578758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 02:59:32.578768 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-03-31 02:59:32.578779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 02:59:32.578789 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.578800 | orchestrator |
2026-03-31 02:59:32.578810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-31 02:59:32.578821 | orchestrator | Tuesday 31 March 2026 02:59:29 +0000 (0:00:00.479) 0:08:30.078 *********
2026-03-31 02:59:32.578832 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.578842 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:32.578853 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:32.578864 | orchestrator |
2026-03-31 02:59:32.578874 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-31 02:59:32.578885 | orchestrator | Tuesday 31 March 2026 02:59:29 +0000 (0:00:00.366) 0:08:30.445 *********
2026-03-31 02:59:32.578895 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.578906 | orchestrator |
2026-03-31 02:59:32.578916 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-31 02:59:32.578927 | orchestrator | Tuesday 31 March 2026 02:59:29 +0000 (0:00:00.241) 0:08:30.686 *********
2026-03-31 02:59:32.578938 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.578948 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:32.578959 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:32.578969 | orchestrator |
2026-03-31 02:59:32.578980 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-31 02:59:32.579003 | orchestrator | Tuesday 31 March 2026 02:59:30 +0000 (0:00:00.626) 0:08:31.313 *********
2026-03-31 02:59:32.579014 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579025 | orchestrator |
2026-03-31 02:59:32.579036 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-31 02:59:32.579046 | orchestrator | Tuesday 31 March 2026 02:59:30 +0000 (0:00:00.243) 0:08:31.557 *********
2026-03-31 02:59:32.579057 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579068 | orchestrator |
2026-03-31 02:59:32.579078 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-31 02:59:32.579117 | orchestrator | Tuesday 31 March 2026 02:59:30 +0000 (0:00:00.242) 0:08:31.800 *********
2026-03-31 02:59:32.579129 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579140 | orchestrator |
2026-03-31 02:59:32.579151 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-31 02:59:32.579161 | orchestrator | Tuesday 31 March 2026 02:59:31 +0000 (0:00:00.147) 0:08:31.947 *********
2026-03-31 02:59:32.579172 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579183 | orchestrator |
2026-03-31 02:59:32.579193 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-31 02:59:32.579204 | orchestrator | Tuesday 31 March 2026 02:59:31 +0000 (0:00:00.249) 0:08:32.196 *********
2026-03-31 02:59:32.579215 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579226 | orchestrator |
2026-03-31 02:59:32.579236 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-31 02:59:32.579247 | orchestrator | Tuesday 31 March 2026 02:59:31 +0000 (0:00:00.229) 0:08:32.426 *********
2026-03-31 02:59:32.579258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 02:59:32.579269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 02:59:32.579280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 02:59:32.579290 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579301 | orchestrator |
2026-03-31 02:59:32.579312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-31 02:59:32.579322 | orchestrator | Tuesday 31 March 2026 02:59:31 +0000 (0:00:00.413) 0:08:32.839 *********
2026-03-31 02:59:32.579333 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:32.579344 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:32.579354 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:32.579365 | orchestrator |
2026-03-31 02:59:32.579376 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-31 02:59:32.579386 | orchestrator | Tuesday 31 March 2026 02:59:32 +0000 (0:00:00.356) 0:08:33.196 *********
2026-03-31 02:59:32.579405 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180142 | orchestrator |
2026-03-31 02:59:55.180246 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-31 02:59:55.180259 | orchestrator | Tuesday 31 March 2026 02:59:32 +0000 (0:00:00.240) 0:08:33.437 *********
2026-03-31 02:59:55.180266 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180275 | orchestrator |
2026-03-31 02:59:55.180282 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-31 02:59:55.180290 | orchestrator |
2026-03-31 02:59:55.180297 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 02:59:55.180304 | orchestrator | Tuesday 31 March 2026 02:59:33 +0000 (0:00:01.341) 0:08:34.778 *********
2026-03-31 02:59:55.180312 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:59:55.180321 | orchestrator |
2026-03-31 02:59:55.180328 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 02:59:55.180335 | orchestrator | Tuesday 31 March 2026 02:59:35 +0000 (0:00:01.320) 0:08:36.099 *********
2026-03-31 02:59:55.180342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 02:59:55.180370 | orchestrator |
2026-03-31 02:59:55.180377 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 02:59:55.180383 | orchestrator | Tuesday 31 March 2026 02:59:36 +0000 (0:00:01.434) 0:08:37.533 *********
2026-03-31 02:59:55.180390 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180396 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.180402 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.180409 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.180416 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.180422 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.180428 | orchestrator |
2026-03-31 02:59:55.180434 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 02:59:55.180441 | orchestrator | Tuesday 31 March 2026 02:59:38 +0000 (0:00:01.461) 0:08:38.994 *********
2026-03-31 02:59:55.180448 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.180454 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.180461 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.180468 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.180475 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.180481 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.180488 | orchestrator |
2026-03-31 02:59:55.180495 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 02:59:55.180502 | orchestrator | Tuesday 31 March 2026 02:59:38 +0000 (0:00:00.772) 0:08:39.767 *********
2026-03-31 02:59:55.180509 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.180516 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.180523 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.180530 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.180537 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.180543 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.180550 | orchestrator |
2026-03-31 02:59:55.180557 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 02:59:55.180564 | orchestrator | Tuesday 31 March 2026 02:59:39 +0000 (0:00:00.937) 0:08:40.705 *********
2026-03-31 02:59:55.180571 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.180578 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.180585 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.180604 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.180610 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.180617 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.180624 | orchestrator |
2026-03-31 02:59:55.180631 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 02:59:55.180638 | orchestrator | Tuesday 31 March 2026 02:59:40 +0000 (0:00:00.740) 0:08:41.445 *********
2026-03-31 02:59:55.180645 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180652 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.180659 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.180666 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.180673 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.180680 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.180687 | orchestrator |
2026-03-31 02:59:55.180694 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 02:59:55.180702 | orchestrator | Tuesday 31 March 2026 02:59:41 +0000 (0:00:01.392) 0:08:42.837 *********
2026-03-31 02:59:55.180709 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180716 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.180723 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.180730 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.180738 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.180745 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.180752 | orchestrator |
2026-03-31 02:59:55.180759 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 02:59:55.180775 | orchestrator | Tuesday 31 March 2026 02:59:42 +0000 (0:00:00.688) 0:08:43.526 *********
2026-03-31 02:59:55.180787 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.180800 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.180813 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.180825 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.180838 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.180850 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.180860 | orchestrator |
2026-03-31 02:59:55.180867 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 02:59:55.180874 | orchestrator | Tuesday 31 March 2026 02:59:43 +0000 (0:00:00.962) 0:08:44.488 *********
2026-03-31 02:59:55.180882 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.180888 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.180894 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.180900 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.180907 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.180913 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.180920 | orchestrator |
2026-03-31 02:59:55.180926 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 02:59:55.180947 | orchestrator | Tuesday 31 March 2026 02:59:44 +0000 (0:00:01.092) 0:08:45.581 *********
2026-03-31 02:59:55.180954 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.180960 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.180967 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.180973 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.180980 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.180986 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.180992 | orchestrator |
2026-03-31 02:59:55.180998 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 02:59:55.181004 | orchestrator | Tuesday 31 March 2026 02:59:46 +0000 (0:00:01.878) 0:08:47.460 *********
2026-03-31 02:59:55.181011 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.181017 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.181024 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.181031 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181038 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181045 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181051 | orchestrator |
2026-03-31 02:59:55.181058 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 02:59:55.181065 | orchestrator | Tuesday 31 March 2026 02:59:47 +0000 (0:00:00.657) 0:08:48.118 *********
2026-03-31 02:59:55.181071 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.181078 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.181085 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.181092 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.181099 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.181106 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.181158 | orchestrator |
2026-03-31 02:59:55.181165 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 02:59:55.181171 | orchestrator | Tuesday 31 March 2026 02:59:48 +0000 (0:00:00.905) 0:08:49.023 *********
2026-03-31 02:59:55.181178 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.181185 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.181191 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.181198 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181205 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181211 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181217 | orchestrator |
2026-03-31 02:59:55.181224 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 02:59:55.181230 | orchestrator | Tuesday 31 March 2026 02:59:48 +0000 (0:00:00.701) 0:08:49.725 *********
2026-03-31 02:59:55.181236 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.181243 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.181249 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.181263 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181270 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181276 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181283 | orchestrator |
2026-03-31 02:59:55.181290 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 02:59:55.181296 | orchestrator | Tuesday 31 March 2026 02:59:49 +0000 (0:00:00.937) 0:08:50.663 *********
2026-03-31 02:59:55.181303 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.181310 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.181316 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.181323 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181330 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181336 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181343 | orchestrator |
2026-03-31 02:59:55.181349 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 02:59:55.181356 | orchestrator | Tuesday 31 March 2026 02:59:50 +0000 (0:00:00.669) 0:08:51.333 *********
2026-03-31 02:59:55.181362 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.181368 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.181375 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.181382 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181389 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181396 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181402 | orchestrator |
2026-03-31 02:59:55.181407 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 02:59:55.181414 | orchestrator | Tuesday 31 March 2026 02:59:51 +0000 (0:00:00.948) 0:08:52.282 *********
2026-03-31 02:59:55.181420 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.181426 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.181433 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.181439 | orchestrator | skipping: [testbed-node-0]
2026-03-31 02:59:55.181445 | orchestrator | skipping: [testbed-node-1]
2026-03-31 02:59:55.181452 | orchestrator | skipping: [testbed-node-2]
2026-03-31 02:59:55.181458 | orchestrator |
2026-03-31 02:59:55.181465 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 02:59:55.181471 | orchestrator | Tuesday 31 March 2026 02:59:52 +0000 (0:00:00.664) 0:08:52.946 *********
2026-03-31 02:59:55.181478 | orchestrator | skipping: [testbed-node-3]
2026-03-31 02:59:55.181485 | orchestrator | skipping: [testbed-node-4]
2026-03-31 02:59:55.181492 | orchestrator | skipping: [testbed-node-5]
2026-03-31 02:59:55.181498 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.181505 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.181511 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.181518 | orchestrator |
2026-03-31 02:59:55.181525 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 02:59:55.181531 | orchestrator | Tuesday 31 March 2026 02:59:53 +0000 (0:00:00.928) 0:08:53.875 *********
2026-03-31 02:59:55.181537 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.181544 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.181551 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.181557 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.181564 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.181570 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.181577 | orchestrator |
2026-03-31 02:59:55.181584 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 02:59:55.181590 | orchestrator | Tuesday 31 March 2026 02:59:53 +0000 (0:00:00.675) 0:08:54.550 *********
2026-03-31 02:59:55.181597 | orchestrator | ok: [testbed-node-3]
2026-03-31 02:59:55.181603 | orchestrator | ok: [testbed-node-4]
2026-03-31 02:59:55.181610 | orchestrator | ok: [testbed-node-5]
2026-03-31 02:59:55.181617 | orchestrator | ok: [testbed-node-0]
2026-03-31 02:59:55.181623 | orchestrator | ok: [testbed-node-1]
2026-03-31 02:59:55.181630 | orchestrator | ok: [testbed-node-2]
2026-03-31 02:59:55.181637 | orchestrator |
2026-03-31 02:59:55.181644 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-31 02:59:55.181734 | orchestrator | Tuesday 31 March 2026 02:59:55 +0000 (0:00:01.478) 0:08:56.029 *********
2026-03-31 03:00:28.853938 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 03:00:28.854056 | orchestrator |
2026-03-31 03:00:28.854066 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-31 03:00:28.854072 | orchestrator | Tuesday 31 March 2026 03:00:00 +0000 (0:00:05.391) 0:09:01.420 *********
2026-03-31 03:00:28.854078 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 03:00:28.854084 | orchestrator |
2026-03-31 03:00:28.854089 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-31 03:00:28.854100 | orchestrator | Tuesday 31 March 2026 03:00:03 +0000 (0:00:02.653) 0:09:04.073 *********
2026-03-31 03:00:28.854106 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:00:28.854111 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:00:28.854116 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:00:28.854122 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:00:28.854128 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:00:28.854133 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:00:28.854138 | orchestrator |
2026-03-31 03:00:28.854143 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-31 03:00:28.854148 | orchestrator | Tuesday 31 March 2026 03:00:04 +0000 (0:00:01.585) 0:09:05.659 *********
2026-03-31 03:00:28.854153 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:00:28.854159 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:00:28.854164 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:00:28.854169 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:00:28.854174 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:00:28.854179 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:00:28.854233 | orchestrator |
2026-03-31 03:00:28.854240 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-31 03:00:28.854245 | orchestrator | Tuesday 31 March 2026 03:00:06 +0000 (0:00:01.319) 0:09:06.978 *********
2026-03-31 03:00:28.854251 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:00:28.854257 | orchestrator |
2026-03-31 03:00:28.854262 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-31 03:00:28.854267 | orchestrator | Tuesday 31 March 2026 03:00:07 +0000 (0:00:01.402) 0:09:08.381 *********
2026-03-31 03:00:28.854272 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:00:28.854277 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:00:28.854282 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:00:28.854286 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:00:28.854291 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:00:28.854296 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:00:28.854301 | orchestrator |
2026-03-31 03:00:28.854306 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-31 03:00:28.854311 | orchestrator | Tuesday 31 March 2026 03:00:09 +0000 (0:00:01.597) 0:09:09.978 *********
2026-03-31 03:00:28.854315 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:00:28.854320 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:00:28.854325 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:00:28.854330 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:00:28.854335 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:00:28.854339 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:00:28.854344 | orchestrator |
2026-03-31 03:00:28.854349 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-31 03:00:28.854367 | orchestrator | Tuesday 31 March 2026 03:00:13 +0000 (0:00:04.089) 0:09:14.068 *********
2026-03-31 03:00:28.854372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:00:28.854394 | orchestrator |
2026-03-31 03:00:28.854399 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-31 03:00:28.854404 | orchestrator | Tuesday 31 March 2026 03:00:14 +0000 (0:00:01.483) 0:09:15.552 *********
2026-03-31 03:00:28.854408 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854413 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854418 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854423 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:00:28.854428 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:00:28.854432 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:00:28.854437 | orchestrator |
2026-03-31 03:00:28.854442 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-31 03:00:28.854447 | orchestrator | Tuesday 31 March 2026 03:00:15 +0000 (0:00:00.728) 0:09:16.280 *********
2026-03-31 03:00:28.854452 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:00:28.854457 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:00:28.854462 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:00:28.854466 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:00:28.854471 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:00:28.854476 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:00:28.854481 | orchestrator |
2026-03-31 03:00:28.854486 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-31 03:00:28.854491 | orchestrator | Tuesday 31 March 2026 03:00:17 +0000 (0:00:02.591) 0:09:18.871 *********
2026-03-31 03:00:28.854495 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854500 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854514 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854519 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:00:28.854529 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:00:28.854534 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:00:28.854539 | orchestrator |
2026-03-31 03:00:28.854544 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-31 03:00:28.854549 | orchestrator |
2026-03-31 03:00:28.854554 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 03:00:28.854559 | orchestrator | Tuesday 31 March 2026 03:00:18 +0000 (0:00:00.967) 0:09:19.839 *********
2026-03-31 03:00:28.854565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:00:28.854570 | orchestrator |
2026-03-31 03:00:28.854575 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 03:00:28.854590 | orchestrator | Tuesday 31 March 2026 03:00:19 +0000 (0:00:00.902) 0:09:20.741 *********
2026-03-31 03:00:28.854596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:00:28.854601 | orchestrator |
2026-03-31 03:00:28.854605 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 03:00:28.854610 | orchestrator | Tuesday 31 March 2026 03:00:20 +0000 (0:00:00.605) 0:09:21.346 *********
2026-03-31 03:00:28.854615 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854620 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854625 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854629 | orchestrator |
2026-03-31 03:00:28.854634 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 03:00:28.854639 | orchestrator | Tuesday 31 March 2026 03:00:21 +0000 (0:00:00.645) 0:09:21.992 *********
2026-03-31 03:00:28.854644 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854648 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854653 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854658 | orchestrator |
2026-03-31 03:00:28.854663 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 03:00:28.854668 | orchestrator | Tuesday 31 March 2026 03:00:21 +0000 (0:00:00.741) 0:09:22.733 *********
2026-03-31 03:00:28.854672 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854677 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854686 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854691 | orchestrator |
2026-03-31 03:00:28.854696 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 03:00:28.854701 | orchestrator | Tuesday 31 March 2026 03:00:22 +0000 (0:00:00.736) 0:09:23.470 *********
2026-03-31 03:00:28.854706 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854710 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854715 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854720 | orchestrator |
2026-03-31 03:00:28.854725 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 03:00:28.854730 | orchestrator | Tuesday 31 March 2026 03:00:23 +0000 (0:00:01.045) 0:09:24.516 *********
2026-03-31 03:00:28.854734 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854739 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854744 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854749 | orchestrator |
2026-03-31 03:00:28.854753 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 03:00:28.854758 | orchestrator | Tuesday 31 March 2026 03:00:23 +0000 (0:00:00.343) 0:09:24.859 *********
2026-03-31 03:00:28.854763 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854768 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854773 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854777 | orchestrator |
2026-03-31 03:00:28.854782 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 03:00:28.854787 | orchestrator | Tuesday 31 March 2026 03:00:24 +0000 (0:00:00.345) 0:09:25.205 *********
2026-03-31 03:00:28.854792 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854796 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854801 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854806 | orchestrator |
2026-03-31 03:00:28.854811 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 03:00:28.854816 | orchestrator | Tuesday 31 March 2026 03:00:24 +0000 (0:00:00.341) 0:09:25.546 *********
2026-03-31 03:00:28.854820 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854825 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854830 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854835 | orchestrator |
2026-03-31 03:00:28.854843 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 03:00:28.854852 | orchestrator | Tuesday 31 March 2026 03:00:25 +0000 (0:00:01.024) 0:09:26.572 *********
2026-03-31 03:00:28.854859 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854867 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854874 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854881 | orchestrator |
2026-03-31 03:00:28.854890 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 03:00:28.854897 | orchestrator | Tuesday 31 March 2026 03:00:26 +0000 (0:00:00.349) 0:09:27.387 *********
2026-03-31 03:00:28.854905 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854912 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854919 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854927 | orchestrator |
2026-03-31 03:00:28.854935 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 03:00:28.854942 | orchestrator | Tuesday 31 March 2026 03:00:26 +0000 (0:00:00.349) 0:09:27.737 *********
2026-03-31 03:00:28.854950 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:00:28.854958 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:00:28.854965 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:00:28.854970 | orchestrator |
2026-03-31 03:00:28.854974 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 03:00:28.854979 | orchestrator | Tuesday 31 March 2026 03:00:27 +0000 (0:00:00.351) 0:09:28.088 *********
2026-03-31 03:00:28.854984 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.854988 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.854993 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.854998 | orchestrator |
2026-03-31 03:00:28.855003 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 03:00:28.855012 | orchestrator | Tuesday 31 March 2026 03:00:27 +0000 (0:00:00.641) 0:09:28.730 *********
2026-03-31 03:00:28.855017 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.855021 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.855026 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.855031 | orchestrator |
2026-03-31 03:00:28.855036 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 03:00:28.855041 | orchestrator | Tuesday 31 March 2026 03:00:28 +0000 (0:00:00.400) 0:09:29.130 *********
2026-03-31 03:00:28.855045 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:00:28.855050 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:00:28.855055 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:00:28.855060 | orchestrator |
2026-03-31 03:00:28.855064 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 03:00:28.855069 | orchestrator | Tuesday 31 March 2026 03:00:28 +0000 (0:00:00.377) 0:09:29.508 *********
2026-03-31 03:00:28.855078 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:01:06.460096 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:01:06.460200 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:01:06.460212 | orchestrator |
2026-03-31 03:01:06.460223 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 03:01:06.460276 | orchestrator | Tuesday 31 March 2026 03:00:28 +0000 (0:00:00.359) 0:09:29.867 *********
2026-03-31 03:01:06.460285 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:01:06.460294 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:01:06.460303 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:01:06.460312 | orchestrator |
2026-03-31 03:01:06.460321 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 03:01:06.460330 | orchestrator | Tuesday 31 March 2026 03:00:29 +0000 (0:00:00.652) 0:09:30.519 *********
2026-03-31 03:01:06.460339 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:01:06.460347 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:01:06.460356 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:01:06.460364 | orchestrator |
2026-03-31 03:01:06.460373 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 03:01:06.460382 | orchestrator | Tuesday 31 March 2026 03:00:30 +0000 (0:00:00.370) 0:09:30.890 *********
2026-03-31 03:01:06.460391 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:01:06.460401 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:01:06.460409 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:01:06.460418 | orchestrator |
2026-03-31 03:01:06.460427 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 03:01:06.460435 | orchestrator | Tuesday 31 March 2026 03:00:30 +0000 (0:00:00.437) 0:09:31.327 *********
2026-03-31 03:01:06.460444 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:01:06.460453 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:01:06.460461 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:01:06.460470 | orchestrator |
2026-03-31 03:01:06.460478 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-31 03:01:06.460487 | orchestrator | Tuesday 31 March 2026 03:00:31 +0000 (0:00:00.872) 0:09:32.199 *********
2026-03-31 03:01:06.460495 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:01:06.460504 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:01:06.460513 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-31 03:01:06.460522 | orchestrator |
2026-03-31 03:01:06.460531 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-31 03:01:06.460539 | orchestrator | Tuesday 31 March 2026 03:00:31 +0000 (0:00:00.473) 0:09:32.673 *********
2026-03-31 03:01:06.460548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 03:01:06.460557 | orchestrator |
2026-03-31 03:01:06.460566 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-31 03:01:06.460574 | orchestrator | Tuesday 31 March 2026 03:00:33 +0000 (0:00:02.080) 0:09:34.754 *********
2026-03-31 03:01:06.460607 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-31 03:01:06.460620 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:01:06.460629 | orchestrator |
2026-03-31 03:01:06.460638 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-31 03:01:06.460659 | orchestrator | Tuesday 31 March 2026 03:00:34 +0000 (0:00:00.262) 0:09:35.016 *********
2026-03-31 03:01:06.460670 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-31 03:01:06.460686 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-31 03:01:06.460695 | orchestrator |
2026-03-31 03:01:06.460704 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-31 03:01:06.460712 | orchestrator | Tuesday 31 March 2026 03:00:41 +0000 (0:00:07.799) 0:09:42.815 *********
2026-03-31 03:01:06.460721 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 03:01:06.460729 | orchestrator |
2026-03-31 03:01:06.460738 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-31 03:01:06.460746 | orchestrator | Tuesday 31 March 2026 03:00:45 +0000 (0:00:03.678) 0:09:46.494 *********
2026-03-31 03:01:06.460755 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:01:06.460764 | orchestrator |
2026-03-31 03:01:06.460773 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-31 03:01:06.460782 | orchestrator | Tuesday 31 March 2026 03:00:46 +0000 (0:00:00.874) 0:09:47.369 *********
2026-03-31 03:01:06.460790 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-31 03:01:06.460798 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-31 03:01:06.460807 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-31 03:01:06.460816 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-31 03:01:06.460824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-31 03:01:06.460832 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-31 03:01:06.460841 | orchestrator |
2026-03-31 03:01:06.460864 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-31 03:01:06.460874 | orchestrator | Tuesday 31 March 2026 03:00:47 +0000 (0:00:01.076) 0:09:48.445 *********
2026-03-31 03:01:06.460883 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 03:01:06.460891 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-31 03:01:06.460900 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-31 03:01:06.460909 | orchestrator |
2026-03-31 03:01:06.460917 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-31 03:01:06.460926 | orchestrator | Tuesday 31 March 2026 03:00:49 +0000 (0:00:02.314) 0:09:50.760 *********
2026-03-31 03:01:06.460935 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-31 03:01:06.460944 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-03-31 03:01:06.460953 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.460962 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 03:01:06.460971 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 03:01:06.460979 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.460996 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 03:01:06.461005 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 03:01:06.461014 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461022 | orchestrator | 2026-03-31 03:01:06.461031 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-31 03:01:06.461040 | orchestrator | Tuesday 31 March 2026 03:00:51 +0000 (0:00:01.270) 0:09:52.031 ********* 2026-03-31 03:01:06.461048 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461057 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461066 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461074 | orchestrator | 2026-03-31 03:01:06.461083 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-31 03:01:06.461092 | orchestrator | Tuesday 31 March 2026 03:00:54 +0000 (0:00:03.278) 0:09:55.310 ********* 2026-03-31 03:01:06.461101 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:06.461109 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:06.461118 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:06.461127 | orchestrator | 2026-03-31 03:01:06.461135 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-31 03:01:06.461144 | orchestrator | Tuesday 31 March 2026 03:00:54 +0000 (0:00:00.407) 0:09:55.718 ********* 2026-03-31 03:01:06.461153 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-31 03:01:06.461162 | orchestrator | 2026-03-31 03:01:06.461171 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-31 03:01:06.461179 | orchestrator | Tuesday 31 March 2026 03:00:55 +0000 (0:00:00.940) 0:09:56.658 ********* 2026-03-31 03:01:06.461188 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:01:06.461197 | orchestrator | 2026-03-31 03:01:06.461205 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-31 03:01:06.461214 | orchestrator | Tuesday 31 March 2026 03:00:56 +0000 (0:00:00.594) 0:09:57.252 ********* 2026-03-31 03:01:06.461222 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461249 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461258 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461266 | orchestrator | 2026-03-31 03:01:06.461281 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-31 03:01:06.461290 | orchestrator | Tuesday 31 March 2026 03:00:57 +0000 (0:00:01.290) 0:09:58.543 ********* 2026-03-31 03:01:06.461298 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461307 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461316 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461324 | orchestrator | 2026-03-31 03:01:06.461333 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-31 03:01:06.461341 | orchestrator | Tuesday 31 March 2026 03:00:59 +0000 (0:00:01.561) 0:10:00.104 ********* 2026-03-31 03:01:06.461350 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461358 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461367 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461375 | orchestrator | 2026-03-31 
03:01:06.461384 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-31 03:01:06.461400 | orchestrator | Tuesday 31 March 2026 03:01:01 +0000 (0:00:01.849) 0:10:01.954 ********* 2026-03-31 03:01:06.461414 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461423 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461432 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461440 | orchestrator | 2026-03-31 03:01:06.461449 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-31 03:01:06.461457 | orchestrator | Tuesday 31 March 2026 03:01:03 +0000 (0:00:01.953) 0:10:03.908 ********* 2026-03-31 03:01:06.461466 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:06.461475 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:06.461490 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:06.461499 | orchestrator | 2026-03-31 03:01:06.461508 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 03:01:06.461516 | orchestrator | Tuesday 31 March 2026 03:01:04 +0000 (0:00:01.568) 0:10:05.476 ********* 2026-03-31 03:01:06.461539 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:06.461548 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:06.461557 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:06.461565 | orchestrator | 2026-03-31 03:01:06.461574 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 03:01:06.461583 | orchestrator | Tuesday 31 March 2026 03:01:05 +0000 (0:00:00.711) 0:10:06.188 ********* 2026-03-31 03:01:06.461591 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:01:06.461600 | orchestrator | 2026-03-31 03:01:06.461609 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-31 03:01:06.461617 | orchestrator | Tuesday 31 March 2026 03:01:06 +0000 (0:00:00.925) 0:10:07.114 ********* 2026-03-31 03:01:06.461632 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.436729 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.436831 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.436843 | orchestrator | 2026-03-31 03:01:28.436854 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-31 03:01:28.436865 | orchestrator | Tuesday 31 March 2026 03:01:06 +0000 (0:00:00.343) 0:10:07.457 ********* 2026-03-31 03:01:28.436873 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:28.436883 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:28.436891 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:28.436898 | orchestrator | 2026-03-31 03:01:28.436905 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-31 03:01:28.436914 | orchestrator | Tuesday 31 March 2026 03:01:07 +0000 (0:00:01.367) 0:10:08.825 ********* 2026-03-31 03:01:28.436923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 03:01:28.436931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 03:01:28.436939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 03:01:28.436948 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.436957 | orchestrator | 2026-03-31 03:01:28.436966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-31 03:01:28.436975 | orchestrator | Tuesday 31 March 2026 03:01:08 +0000 (0:00:00.953) 0:10:09.778 ********* 2026-03-31 03:01:28.436984 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.436992 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437000 | orchestrator | ok: [testbed-node-5] 2026-03-31 
03:01:28.437008 | orchestrator | 2026-03-31 03:01:28.437016 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-31 03:01:28.437024 | orchestrator | 2026-03-31 03:01:28.437033 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 03:01:28.437042 | orchestrator | Tuesday 31 March 2026 03:01:10 +0000 (0:00:01.277) 0:10:11.056 ********* 2026-03-31 03:01:28.437052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:01:28.437062 | orchestrator | 2026-03-31 03:01:28.437071 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 03:01:28.437080 | orchestrator | Tuesday 31 March 2026 03:01:10 +0000 (0:00:00.726) 0:10:11.782 ********* 2026-03-31 03:01:28.437089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:01:28.437099 | orchestrator | 2026-03-31 03:01:28.437108 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 03:01:28.437117 | orchestrator | Tuesday 31 March 2026 03:01:11 +0000 (0:00:01.083) 0:10:12.866 ********* 2026-03-31 03:01:28.437125 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437158 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437169 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.437178 | orchestrator | 2026-03-31 03:01:28.437186 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 03:01:28.437195 | orchestrator | Tuesday 31 March 2026 03:01:12 +0000 (0:00:00.393) 0:10:13.259 ********* 2026-03-31 03:01:28.437204 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437213 | orchestrator | ok: [testbed-node-4] 2026-03-31 
03:01:28.437222 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437230 | orchestrator | 2026-03-31 03:01:28.437239 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-31 03:01:28.437320 | orchestrator | Tuesday 31 March 2026 03:01:13 +0000 (0:00:00.848) 0:10:14.107 ********* 2026-03-31 03:01:28.437335 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437345 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437354 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437363 | orchestrator | 2026-03-31 03:01:28.437372 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 03:01:28.437381 | orchestrator | Tuesday 31 March 2026 03:01:14 +0000 (0:00:01.261) 0:10:15.369 ********* 2026-03-31 03:01:28.437390 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437398 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437407 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437416 | orchestrator | 2026-03-31 03:01:28.437425 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 03:01:28.437434 | orchestrator | Tuesday 31 March 2026 03:01:15 +0000 (0:00:00.812) 0:10:16.182 ********* 2026-03-31 03:01:28.437443 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437450 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437458 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.437465 | orchestrator | 2026-03-31 03:01:28.437474 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 03:01:28.437484 | orchestrator | Tuesday 31 March 2026 03:01:15 +0000 (0:00:00.456) 0:10:16.638 ********* 2026-03-31 03:01:28.437493 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437502 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437510 | orchestrator | skipping: 
[testbed-node-5] 2026-03-31 03:01:28.437519 | orchestrator | 2026-03-31 03:01:28.437527 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 03:01:28.437535 | orchestrator | Tuesday 31 March 2026 03:01:16 +0000 (0:00:00.339) 0:10:16.978 ********* 2026-03-31 03:01:28.437544 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437552 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437561 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.437570 | orchestrator | 2026-03-31 03:01:28.437579 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 03:01:28.437588 | orchestrator | Tuesday 31 March 2026 03:01:16 +0000 (0:00:00.637) 0:10:17.615 ********* 2026-03-31 03:01:28.437597 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437606 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437614 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437622 | orchestrator | 2026-03-31 03:01:28.437630 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 03:01:28.437638 | orchestrator | Tuesday 31 March 2026 03:01:17 +0000 (0:00:00.780) 0:10:18.396 ********* 2026-03-31 03:01:28.437645 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437653 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437661 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437669 | orchestrator | 2026-03-31 03:01:28.437698 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 03:01:28.437708 | orchestrator | Tuesday 31 March 2026 03:01:18 +0000 (0:00:00.766) 0:10:19.162 ********* 2026-03-31 03:01:28.437717 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437725 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437734 | orchestrator | skipping: [testbed-node-5] 2026-03-31 
03:01:28.437754 | orchestrator | 2026-03-31 03:01:28.437763 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 03:01:28.437772 | orchestrator | Tuesday 31 March 2026 03:01:18 +0000 (0:00:00.314) 0:10:19.477 ********* 2026-03-31 03:01:28.437781 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437790 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437799 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.437808 | orchestrator | 2026-03-31 03:01:28.437816 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 03:01:28.437825 | orchestrator | Tuesday 31 March 2026 03:01:19 +0000 (0:00:00.626) 0:10:20.104 ********* 2026-03-31 03:01:28.437833 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437842 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437851 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437859 | orchestrator | 2026-03-31 03:01:28.437868 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 03:01:28.437877 | orchestrator | Tuesday 31 March 2026 03:01:19 +0000 (0:00:00.378) 0:10:20.482 ********* 2026-03-31 03:01:28.437887 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437895 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437903 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437910 | orchestrator | 2026-03-31 03:01:28.437918 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 03:01:28.437926 | orchestrator | Tuesday 31 March 2026 03:01:19 +0000 (0:00:00.380) 0:10:20.863 ********* 2026-03-31 03:01:28.437933 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.437941 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.437948 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.437956 | orchestrator | 2026-03-31 
03:01:28.437964 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 03:01:28.437973 | orchestrator | Tuesday 31 March 2026 03:01:20 +0000 (0:00:00.352) 0:10:21.215 ********* 2026-03-31 03:01:28.437982 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.437989 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.437997 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.438005 | orchestrator | 2026-03-31 03:01:28.438012 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 03:01:28.438075 | orchestrator | Tuesday 31 March 2026 03:01:20 +0000 (0:00:00.626) 0:10:21.842 ********* 2026-03-31 03:01:28.438085 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.438093 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.438102 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.438110 | orchestrator | 2026-03-31 03:01:28.438119 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 03:01:28.438128 | orchestrator | Tuesday 31 March 2026 03:01:21 +0000 (0:00:00.376) 0:10:22.219 ********* 2026-03-31 03:01:28.438139 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.438147 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:01:28.438155 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:01:28.438163 | orchestrator | 2026-03-31 03:01:28.438171 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 03:01:28.438180 | orchestrator | Tuesday 31 March 2026 03:01:21 +0000 (0:00:00.376) 0:10:22.596 ********* 2026-03-31 03:01:28.438197 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.438206 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.438215 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.438223 | orchestrator | 2026-03-31 03:01:28.438231 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 03:01:28.438239 | orchestrator | Tuesday 31 March 2026 03:01:22 +0000 (0:00:00.440) 0:10:23.036 ********* 2026-03-31 03:01:28.438246 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:01:28.438293 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:01:28.438301 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:01:28.438308 | orchestrator | 2026-03-31 03:01:28.438316 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-31 03:01:28.438333 | orchestrator | Tuesday 31 March 2026 03:01:23 +0000 (0:00:00.954) 0:10:23.990 ********* 2026-03-31 03:01:28.438341 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:01:28.438349 | orchestrator | 2026-03-31 03:01:28.438357 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 03:01:28.438364 | orchestrator | Tuesday 31 March 2026 03:01:23 +0000 (0:00:00.560) 0:10:24.551 ********* 2026-03-31 03:01:28.438371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:01:28.438378 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 03:01:28.438386 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 03:01:28.438392 | orchestrator | 2026-03-31 03:01:28.438400 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 03:01:28.438407 | orchestrator | Tuesday 31 March 2026 03:01:26 +0000 (0:00:02.653) 0:10:27.204 ********* 2026-03-31 03:01:28.438414 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 03:01:28.438422 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 03:01:28.438429 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:01:28.438436 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-31 03:01:28.438444 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 03:01:28.438451 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:01:28.438458 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 03:01:28.438466 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 03:01:28.438473 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:01:28.438480 | orchestrator | 2026-03-31 03:01:28.438488 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-31 03:01:28.438495 | orchestrator | Tuesday 31 March 2026 03:01:27 +0000 (0:00:01.647) 0:10:28.852 ********* 2026-03-31 03:01:28.438503 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:01:28.438525 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:21.812949 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:21.813064 | orchestrator | 2026-03-31 03:02:21.813081 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-31 03:02:21.813094 | orchestrator | Tuesday 31 March 2026 03:01:28 +0000 (0:00:00.443) 0:10:29.295 ********* 2026-03-31 03:02:21.813106 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:02:21.813121 | orchestrator | 2026-03-31 03:02:21.813140 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-31 03:02:21.813160 | orchestrator | Tuesday 31 March 2026 03:01:29 +0000 (0:00:00.894) 0:10:30.190 ********* 2026-03-31 03:02:21.813180 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:21.813201 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:21.813220 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:21.813240 | orchestrator | 2026-03-31 03:02:21.813251 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-31 03:02:21.813262 | orchestrator | Tuesday 31 March 2026 03:01:30 +0000 (0:00:00.957) 0:10:31.148 ********* 2026-03-31 03:02:21.813273 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813285 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-31 03:02:21.813296 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813398 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-31 03:02:21.813411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813423 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-31 03:02:21.813434 | orchestrator | 2026-03-31 03:02:21.813445 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 03:02:21.813456 | orchestrator | Tuesday 31 March 2026 03:01:35 +0000 (0:00:04.906) 0:10:36.055 ********* 2026-03-31 03:02:21.813467 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813478 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 03:02:21.813491 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813518 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 03:02:21.813531 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:02:21.813544 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 03:02:21.813557 | orchestrator | 2026-03-31 03:02:21.813569 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 03:02:21.813582 | orchestrator | Tuesday 31 March 2026 03:01:37 +0000 (0:00:02.363) 0:10:38.419 ********* 2026-03-31 03:02:21.813595 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 03:02:21.813608 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:02:21.813621 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 03:02:21.813633 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:02:21.813645 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 03:02:21.813658 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:02:21.813670 | orchestrator | 2026-03-31 03:02:21.813681 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-31 03:02:21.813692 | orchestrator | Tuesday 31 March 2026 03:01:39 +0000 (0:00:01.609) 0:10:40.028 ********* 2026-03-31 03:02:21.813703 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-31 03:02:21.813714 | orchestrator | 2026-03-31 03:02:21.813725 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-31 03:02:21.813736 | orchestrator | Tuesday 31 March 2026 03:01:39 +0000 (0:00:00.277) 0:10:40.305 ********* 2026-03-31 03:02:21.813747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-31 03:02:21.813758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813802 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:21.813813 | orchestrator | 2026-03-31 03:02:21.813843 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-31 03:02:21.813855 | orchestrator | Tuesday 31 March 2026 03:01:40 +0000 (0:00:00.646) 0:10:40.951 ********* 2026-03-31 03:02:21.813866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 03:02:21.813928 | orchestrator | skipping: [testbed-node-3] 2026-03-31 
03:02:21.813939 | orchestrator | 2026-03-31 03:02:21.813950 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-31 03:02:21.813961 | orchestrator | Tuesday 31 March 2026 03:01:40 +0000 (0:00:00.649) 0:10:41.600 ********* 2026-03-31 03:02:21.813971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 03:02:21.813982 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 03:02:21.813993 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 03:02:21.814004 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 03:02:21.814073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 03:02:21.814088 | orchestrator | 2026-03-31 03:02:21.814099 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-31 03:02:21.814110 | orchestrator | Tuesday 31 March 2026 03:02:11 +0000 (0:00:30.418) 0:11:12.019 ********* 2026-03-31 03:02:21.814121 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:21.814132 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:21.814143 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:21.814154 | orchestrator | 2026-03-31 03:02:21.814165 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-31 03:02:21.814176 | orchestrator | 
Tuesday 31 March 2026 03:02:11 +0000 (0:00:00.361) 0:11:12.381 ********* 2026-03-31 03:02:21.814193 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:21.814204 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:21.814215 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:21.814226 | orchestrator | 2026-03-31 03:02:21.814237 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-31 03:02:21.814248 | orchestrator | Tuesday 31 March 2026 03:02:11 +0000 (0:00:00.362) 0:11:12.744 ********* 2026-03-31 03:02:21.814259 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:02:21.814269 | orchestrator | 2026-03-31 03:02:21.814280 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-31 03:02:21.814291 | orchestrator | Tuesday 31 March 2026 03:02:12 +0000 (0:00:00.904) 0:11:13.648 ********* 2026-03-31 03:02:21.814321 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:02:21.814334 | orchestrator | 2026-03-31 03:02:21.814345 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-31 03:02:21.814356 | orchestrator | Tuesday 31 March 2026 03:02:13 +0000 (0:00:00.934) 0:11:14.583 ********* 2026-03-31 03:02:21.814367 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:02:21.814378 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:02:21.814388 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:02:21.814399 | orchestrator | 2026-03-31 03:02:21.814410 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-31 03:02:21.814430 | orchestrator | Tuesday 31 March 2026 03:02:15 +0000 (0:00:01.322) 0:11:15.906 ********* 2026-03-31 03:02:21.814441 | orchestrator | changed: 
[testbed-node-3] 2026-03-31 03:02:21.814452 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:02:21.814462 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:02:21.814473 | orchestrator | 2026-03-31 03:02:21.814484 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-31 03:02:21.814495 | orchestrator | Tuesday 31 March 2026 03:02:16 +0000 (0:00:01.224) 0:11:17.130 ********* 2026-03-31 03:02:21.814506 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:02:21.814516 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:02:21.814527 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:02:21.814538 | orchestrator | 2026-03-31 03:02:21.814549 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-31 03:02:21.814560 | orchestrator | Tuesday 31 March 2026 03:02:19 +0000 (0:00:02.804) 0:11:19.935 ********* 2026-03-31 03:02:21.814571 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:21.814590 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:26.041217 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 03:02:26.041428 | orchestrator | 2026-03-31 03:02:26.041455 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 03:02:26.041474 | orchestrator | Tuesday 31 March 2026 03:02:21 +0000 (0:00:02.731) 0:11:22.666 ********* 2026-03-31 03:02:26.041491 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:26.041509 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:26.041527 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:26.041544 | orchestrator 
| 2026-03-31 03:02:26.041561 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 03:02:26.041579 | orchestrator | Tuesday 31 March 2026 03:02:22 +0000 (0:00:00.374) 0:11:23.041 ********* 2026-03-31 03:02:26.041596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:02:26.041613 | orchestrator | 2026-03-31 03:02:26.041630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-31 03:02:26.041646 | orchestrator | Tuesday 31 March 2026 03:02:23 +0000 (0:00:00.900) 0:11:23.941 ********* 2026-03-31 03:02:26.041662 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:26.041679 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:26.041695 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:26.041713 | orchestrator | 2026-03-31 03:02:26.041729 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-31 03:02:26.041746 | orchestrator | Tuesday 31 March 2026 03:02:23 +0000 (0:00:00.382) 0:11:24.324 ********* 2026-03-31 03:02:26.041764 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:26.041781 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:26.041797 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:26.041813 | orchestrator | 2026-03-31 03:02:26.041829 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-31 03:02:26.041845 | orchestrator | Tuesday 31 March 2026 03:02:23 +0000 (0:00:00.386) 0:11:24.710 ********* 2026-03-31 03:02:26.041861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 03:02:26.041877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 03:02:26.041893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 03:02:26.041909 | orchestrator 
| skipping: [testbed-node-3]
2026-03-31 03:02:26.041925 | orchestrator |
2026-03-31 03:02:26.041941 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-31 03:02:26.041958 | orchestrator | Tuesday 31 March 2026 03:02:24 +0000 (0:00:01.066) 0:11:25.777 *********
2026-03-31 03:02:26.042012 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:02:26.042119 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:02:26.042135 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:02:26.042151 | orchestrator |
2026-03-31 03:02:26.042169 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:02:26.042185 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-31 03:02:26.042224 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-31 03:02:26.042240 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-31 03:02:26.042256 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-31 03:02:26.042272 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-31 03:02:26.042287 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-31 03:02:26.042303 | orchestrator |
2026-03-31 03:02:26.042348 | orchestrator |
2026-03-31 03:02:26.042363 | orchestrator |
2026-03-31 03:02:26.042379 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:02:26.042396 | orchestrator | Tuesday 31 March 2026 03:02:25 +0000 (0:00:00.583) 0:11:26.361 *********
2026-03-31 03:02:26.042411 | orchestrator | ===============================================================================
2026-03-31 03:02:26.042427 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.61s
2026-03-31 03:02:26.042443 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.02s
2026-03-31 03:02:26.042459 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 31.40s
2026-03-31 03:02:26.042475 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.42s
2026-03-31 03:02:26.042491 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.03s
2026-03-31 03:02:26.042507 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.83s
2026-03-31 03:02:26.042522 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.77s
2026-03-31 03:02:26.042538 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 12.36s
2026-03-31 03:02:26.042554 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.19s
2026-03-31 03:02:26.042569 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.80s
2026-03-31 03:02:26.042585 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.78s
2026-03-31 03:02:26.042630 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.42s
2026-03-31 03:02:26.042646 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 5.39s
2026-03-31 03:02:26.042662 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.28s
2026-03-31 03:02:26.042680 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.91s
2026-03-31 03:02:26.042696 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.09s
2026-03-31 03:02:26.042712 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.80s
2026-03-31 03:02:26.042727 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.71s
2026-03-31 03:02:26.042743 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.68s
2026-03-31 03:02:26.042760 | orchestrator | ceph-mds : Create mds keyring ------------------------------------------- 3.28s
2026-03-31 03:02:28.862625 | orchestrator | 2026-03-31 03:02:28 | INFO  | Task 5dccef6b-e1c8-4a3a-b142-ce6a6e363a91 (ceph-pools) was prepared for execution.
2026-03-31 03:02:28.862797 | orchestrator | 2026-03-31 03:02:28 | INFO  | It takes a moment until task 5dccef6b-e1c8-4a3a-b142-ce6a6e363a91 (ceph-pools) has been started and output is visible here.
2026-03-31 03:02:44.036796 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-31 03:02:44.036881 | orchestrator | 2.16.14
2026-03-31 03:02:44.036888 | orchestrator |
2026-03-31 03:02:44.036893 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-31 03:02:44.036898 | orchestrator |
2026-03-31 03:02:44.036902 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 03:02:44.036907 | orchestrator | Tuesday 31 March 2026 03:02:33 +0000 (0:00:00.674) 0:00:00.674 *********
2026-03-31 03:02:44.036911 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:02:44.036917 | orchestrator |
2026-03-31 03:02:44.036921 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-31 03:02:44.036925 | orchestrator | Tuesday 31 March 2026 03:02:34 +0000 (0:00:00.786) 0:00:01.461 *********
2026-03-31 03:02:44.036929 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:02:44.036933 | 
orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.036936 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.036940 | orchestrator | 2026-03-31 03:02:44.036944 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 03:02:44.036948 | orchestrator | Tuesday 31 March 2026 03:02:35 +0000 (0:00:00.714) 0:00:02.175 ********* 2026-03-31 03:02:44.036951 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.036955 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.036959 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.036963 | orchestrator | 2026-03-31 03:02:44.036966 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 03:02:44.036970 | orchestrator | Tuesday 31 March 2026 03:02:35 +0000 (0:00:00.318) 0:00:02.494 ********* 2026-03-31 03:02:44.036985 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.036989 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.036992 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.036996 | orchestrator | 2026-03-31 03:02:44.037000 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 03:02:44.037004 | orchestrator | Tuesday 31 March 2026 03:02:36 +0000 (0:00:00.903) 0:00:03.397 ********* 2026-03-31 03:02:44.037007 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.037011 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.037015 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.037019 | orchestrator | 2026-03-31 03:02:44.037022 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 03:02:44.037026 | orchestrator | Tuesday 31 March 2026 03:02:36 +0000 (0:00:00.355) 0:00:03.753 ********* 2026-03-31 03:02:44.037030 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.037034 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.037037 | 
orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.037041 | orchestrator | 2026-03-31 03:02:44.037045 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 03:02:44.037049 | orchestrator | Tuesday 31 March 2026 03:02:37 +0000 (0:00:00.359) 0:00:04.113 ********* 2026-03-31 03:02:44.037052 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.037056 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.037060 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.037064 | orchestrator | 2026-03-31 03:02:44.037068 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 03:02:44.037072 | orchestrator | Tuesday 31 March 2026 03:02:37 +0000 (0:00:00.367) 0:00:04.480 ********* 2026-03-31 03:02:44.037076 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:44.037080 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:44.037084 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:44.037113 | orchestrator | 2026-03-31 03:02:44.037118 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 03:02:44.037128 | orchestrator | Tuesday 31 March 2026 03:02:38 +0000 (0:00:00.563) 0:00:05.043 ********* 2026-03-31 03:02:44.037132 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.037135 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.037139 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.037143 | orchestrator | 2026-03-31 03:02:44.037147 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 03:02:44.037151 | orchestrator | Tuesday 31 March 2026 03:02:38 +0000 (0:00:00.357) 0:00:05.401 ********* 2026-03-31 03:02:44.037155 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 03:02:44.037158 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 03:02:44.037162 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 03:02:44.037166 | orchestrator | 2026-03-31 03:02:44.037170 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 03:02:44.037174 | orchestrator | Tuesday 31 March 2026 03:02:39 +0000 (0:00:00.703) 0:00:06.105 ********* 2026-03-31 03:02:44.037178 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:44.037181 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:44.037185 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:44.037189 | orchestrator | 2026-03-31 03:02:44.037193 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 03:02:44.037196 | orchestrator | Tuesday 31 March 2026 03:02:39 +0000 (0:00:00.487) 0:00:06.592 ********* 2026-03-31 03:02:44.037200 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 03:02:44.037204 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 03:02:44.037208 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 03:02:44.037212 | orchestrator | 2026-03-31 03:02:44.037215 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 03:02:44.037220 | orchestrator | Tuesday 31 March 2026 03:02:41 +0000 (0:00:02.225) 0:00:08.818 ********* 2026-03-31 03:02:44.037223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-31 03:02:44.037228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-31 03:02:44.037232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-31 03:02:44.037235 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:44.037239 | 
orchestrator | 2026-03-31 03:02:44.037253 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 03:02:44.037257 | orchestrator | Tuesday 31 March 2026 03:02:42 +0000 (0:00:00.693) 0:00:09.512 ********* 2026-03-31 03:02:44.037263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037277 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:44.037280 | orchestrator | 2026-03-31 03:02:44.037284 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 03:02:44.037288 | orchestrator | Tuesday 31 March 2026 03:02:43 +0000 (0:00:01.123) 0:00:10.635 ********* 2026-03-31 03:02:44.037303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:44.037318 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:44.037322 | orchestrator | 2026-03-31 03:02:44.037414 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 03:02:44.037419 | orchestrator | Tuesday 31 March 2026 03:02:43 +0000 (0:00:00.172) 0:00:10.808 ********* 2026-03-31 03:02:44.037426 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '80cb11f76dbe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 03:02:40.475327', 'end': '2026-03-31 03:02:40.513780', 'delta': '0:00:00.038453', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80cb11f76dbe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 03:02:44.037433 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1ea1d727f3e0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 03:02:41.046086', 'end': '2026-03-31 03:02:41.093935', 'delta': '0:00:00.047849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ea1d727f3e0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 03:02:44.037443 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'df3f30930c20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 03:02:41.615802', 'end': '2026-03-31 03:02:41.666741', 'delta': '0:00:00.050939', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df3f30930c20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 03:02:51.518773 | orchestrator | 2026-03-31 03:02:51.518865 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 03:02:51.518893 | orchestrator | Tuesday 31 March 2026 03:02:44 +0000 (0:00:00.226) 0:00:11.034 ********* 2026-03-31 03:02:51.518901 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:51.518909 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:02:51.518916 | 
orchestrator | ok: [testbed-node-5] 2026-03-31 03:02:51.518922 | orchestrator | 2026-03-31 03:02:51.518929 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 03:02:51.518936 | orchestrator | Tuesday 31 March 2026 03:02:44 +0000 (0:00:00.529) 0:00:11.564 ********* 2026-03-31 03:02:51.518955 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-31 03:02:51.518962 | orchestrator | 2026-03-31 03:02:51.518969 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 03:02:51.518976 | orchestrator | Tuesday 31 March 2026 03:02:46 +0000 (0:00:01.771) 0:00:13.336 ********* 2026-03-31 03:02:51.518982 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.518989 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.518996 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519002 | orchestrator | 2026-03-31 03:02:51.519009 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 03:02:51.519016 | orchestrator | Tuesday 31 March 2026 03:02:46 +0000 (0:00:00.323) 0:00:13.659 ********* 2026-03-31 03:02:51.519022 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519029 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519036 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519043 | orchestrator | 2026-03-31 03:02:51.519049 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 03:02:51.519056 | orchestrator | Tuesday 31 March 2026 03:02:47 +0000 (0:00:00.967) 0:00:14.626 ********* 2026-03-31 03:02:51.519063 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519070 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519077 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519084 | orchestrator | 2026-03-31 03:02:51.519090 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 03:02:51.519097 | orchestrator | Tuesday 31 March 2026 03:02:47 +0000 (0:00:00.319) 0:00:14.946 ********* 2026-03-31 03:02:51.519104 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:02:51.519110 | orchestrator | 2026-03-31 03:02:51.519117 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 03:02:51.519124 | orchestrator | Tuesday 31 March 2026 03:02:48 +0000 (0:00:00.140) 0:00:15.087 ********* 2026-03-31 03:02:51.519130 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519137 | orchestrator | 2026-03-31 03:02:51.519144 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 03:02:51.519151 | orchestrator | Tuesday 31 March 2026 03:02:48 +0000 (0:00:00.236) 0:00:15.323 ********* 2026-03-31 03:02:51.519157 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519164 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519171 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519178 | orchestrator | 2026-03-31 03:02:51.519184 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 03:02:51.519191 | orchestrator | Tuesday 31 March 2026 03:02:48 +0000 (0:00:00.343) 0:00:15.667 ********* 2026-03-31 03:02:51.519198 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519205 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519211 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519218 | orchestrator | 2026-03-31 03:02:51.519225 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 03:02:51.519231 | orchestrator | Tuesday 31 March 2026 03:02:48 +0000 (0:00:00.340) 0:00:16.008 ********* 2026-03-31 03:02:51.519238 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 03:02:51.519245 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519252 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519258 | orchestrator | 2026-03-31 03:02:51.519271 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 03:02:51.519278 | orchestrator | Tuesday 31 March 2026 03:02:49 +0000 (0:00:00.641) 0:00:16.649 ********* 2026-03-31 03:02:51.519285 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519291 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519298 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519305 | orchestrator | 2026-03-31 03:02:51.519312 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 03:02:51.519319 | orchestrator | Tuesday 31 March 2026 03:02:49 +0000 (0:00:00.363) 0:00:17.013 ********* 2026-03-31 03:02:51.519325 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519377 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519385 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519392 | orchestrator | 2026-03-31 03:02:51.519399 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 03:02:51.519405 | orchestrator | Tuesday 31 March 2026 03:02:50 +0000 (0:00:00.340) 0:00:17.353 ********* 2026-03-31 03:02:51.519412 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.519419 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519425 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519432 | orchestrator | 2026-03-31 03:02:51.519439 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 03:02:51.519446 | orchestrator | Tuesday 31 March 2026 03:02:50 +0000 (0:00:00.569) 0:00:17.922 ********* 2026-03-31 03:02:51.519453 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 03:02:51.519460 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.519466 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:51.519473 | orchestrator | 2026-03-31 03:02:51.519479 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 03:02:51.519486 | orchestrator | Tuesday 31 March 2026 03:02:51 +0000 (0:00:00.363) 0:00:18.286 ********* 2026-03-31 03:02:51.519509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-31 03:02:51.519577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.519597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.595328 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.595411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.595428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.595448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.595459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-31 03:02:51.595464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.595472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.800725 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:02:51.800753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.800771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.800788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.800799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:51.800809 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:02:51.800819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:51.800854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047939 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.047951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-31 03:02:52.048005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:52.048036 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:52.048050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:52.048064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:52.048078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-31 03:02:52.048092 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:02:52.048107 | orchestrator | 2026-03-31 03:02:52.048120 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 03:02:52.048134 | orchestrator | Tuesday 31 March 2026 03:02:51 +0000 (0:00:00.654) 0:00:18.940 ********* 2026-03-31 03:02:52.048163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:52.217324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:52.217459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-31 03:02:52.217481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.217834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.303948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304093 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304167 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304258 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:02:52.304275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.304375 | orchestrator | skipping:
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400462 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.400476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551472 | orchestrator | skipping:
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551503 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:02:52.551550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551654 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:02:52.551767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:03:05.221119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-31-01-38-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-31 03:03:05.221264 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:03:05.221282 | orchestrator |
2026-03-31 03:03:05.221295 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 03:03:05.221308 | orchestrator | Tuesday 31 March 2026 03:02:52 +0000 (0:00:00.611)       0:00:19.552 *********
2026-03-31 03:03:05.221320 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:03:05.221332 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:03:05.221417 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:03:05.221433 | orchestrator |
2026-03-31 03:03:05.221444 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 03:03:05.221456 | orchestrator | Tuesday 31 March 2026 03:02:53 +0000 (0:00:00.915)       0:00:20.468 *********
2026-03-31 03:03:05.221467 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:03:05.221478 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:03:05.221488 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:03:05.221500 | orchestrator |
2026-03-31 03:03:05.221511 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 03:03:05.221521 | orchestrator | Tuesday 31 March 2026 03:02:53 +0000 (0:00:00.345)       0:00:20.814 *********
2026-03-31 03:03:05.221534 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:03:05.221560 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:03:05.221572 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:03:05.221584 | orchestrator |
2026-03-31 03:03:05.221594 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 03:03:05.221605 | orchestrator | Tuesday 31 March 2026 03:02:54 +0000 (0:00:00.674)       0:00:21.488 *********
2026-03-31 03:03:05.221618 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:03:05.221629 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:03:05.221639 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:03:05.221655 | orchestrator |
2026-03-31 03:03:05.221668 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 03:03:05.221679 | orchestrator | Tuesday 31 March 2026 03:02:54 +0000 (0:00:00.324)       0:00:21.813 *********
2026-03-31 03:03:05.221691 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:03:05.221704 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:03:05.221717 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:03:05.221730 | orchestrator |
2026-03-31 03:03:05.221740 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 03:03:05.221753 | orchestrator | Tuesday 31 March 2026 03:02:55 +0000 (0:00:00.755)       0:00:22.568 *********
2026-03-31 03:03:05.221764 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:03:05.221776 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:03:05.221789 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:03:05.221800 | orchestrator |
2026-03-31 03:03:05.221811 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 03:03:05.221824 | orchestrator | Tuesday 31 March 2026 03:02:55 +0000 (0:00:00.344)       0:00:22.913 *********
2026-03-31 03:03:05.221837 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 03:03:05.221848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 03:03:05.221863 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 03:03:05.221876 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-31 03:03:05.221888 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 03:03:05.221912 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 03:03:05.221925 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-31 03:03:05.221936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 03:03:05.221948 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-31 03:03:05.221961 | orchestrator |
2026-03-31 03:03:05.221974 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 03:03:05.221988 | orchestrator | Tuesday 31 March 2026 03:02:57 +0000 (0:00:01.172)       0:00:24.085 *********
2026-03-31 03:03:05.222000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 03:03:05.222074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 03:03:05.222091 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-2)  2026-03-31 03:03:05.222103 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 03:03:05.222126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 03:03:05.222136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 03:03:05.222147 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:03:05.222157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 03:03:05.222167 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 03:03:05.222178 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 03:03:05.222188 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:03:05.222198 | orchestrator | 2026-03-31 03:03:05.222208 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 03:03:05.222218 | orchestrator | Tuesday 31 March 2026 03:02:57 +0000 (0:00:00.395) 0:00:24.481 ********* 2026-03-31 03:03:05.222249 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:03:05.222261 | orchestrator | 2026-03-31 03:03:05.222271 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 03:03:05.222285 | orchestrator | Tuesday 31 March 2026 03:02:58 +0000 (0:00:00.799) 0:00:25.281 ********* 2026-03-31 03:03:05.222296 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222307 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:03:05.222318 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:03:05.222330 | orchestrator | 2026-03-31 03:03:05.222342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-31 03:03:05.222375 | orchestrator | Tuesday 31 March 2026 03:02:58 +0000 (0:00:00.384) 0:00:25.665 ********* 2026-03-31 03:03:05.222385 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222396 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:03:05.222407 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:03:05.222417 | orchestrator | 2026-03-31 03:03:05.222429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 03:03:05.222440 | orchestrator | Tuesday 31 March 2026 03:02:58 +0000 (0:00:00.343) 0:00:26.008 ********* 2026-03-31 03:03:05.222451 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222462 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:03:05.222474 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:03:05.222485 | orchestrator | 2026-03-31 03:03:05.222495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 03:03:05.222506 | orchestrator | Tuesday 31 March 2026 03:02:59 +0000 (0:00:00.604) 0:00:26.613 ********* 2026-03-31 03:03:05.222518 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:03:05.222530 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:03:05.222541 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:03:05.222552 | orchestrator | 2026-03-31 03:03:05.222564 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 03:03:05.222575 | orchestrator | Tuesday 31 March 2026 03:03:00 +0000 (0:00:00.503) 0:00:27.117 ********* 2026-03-31 03:03:05.222597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 03:03:05.222615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 03:03:05.222625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 03:03:05.222636 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222646 | 
orchestrator | 2026-03-31 03:03:05.222656 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 03:03:05.222666 | orchestrator | Tuesday 31 March 2026 03:03:00 +0000 (0:00:00.401) 0:00:27.518 ********* 2026-03-31 03:03:05.222677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 03:03:05.222688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 03:03:05.222698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 03:03:05.222708 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222719 | orchestrator | 2026-03-31 03:03:05.222729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 03:03:05.222739 | orchestrator | Tuesday 31 March 2026 03:03:00 +0000 (0:00:00.454) 0:00:27.973 ********* 2026-03-31 03:03:05.222750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 03:03:05.222760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 03:03:05.222770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 03:03:05.222780 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:03:05.222791 | orchestrator | 2026-03-31 03:03:05.222800 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 03:03:05.222811 | orchestrator | Tuesday 31 March 2026 03:03:01 +0000 (0:00:00.430) 0:00:28.403 ********* 2026-03-31 03:03:05.222822 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:03:05.222832 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:03:05.222842 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:03:05.222852 | orchestrator | 2026-03-31 03:03:05.222863 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 03:03:05.222874 | orchestrator | Tuesday 31 March 2026 03:03:01 +0000 
(0:00:00.373) 0:00:28.776 ********* 2026-03-31 03:03:05.222884 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 03:03:05.222894 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 03:03:05.222904 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 03:03:05.222914 | orchestrator | 2026-03-31 03:03:05.222925 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 03:03:05.222935 | orchestrator | Tuesday 31 March 2026 03:03:02 +0000 (0:00:00.826) 0:00:29.603 ********* 2026-03-31 03:03:05.222945 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 03:03:05.222955 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 03:03:05.222966 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 03:03:05.222976 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-31 03:03:05.222986 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 03:03:05.222996 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 03:03:05.223007 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 03:03:05.223018 | orchestrator | 2026-03-31 03:03:05.223028 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 03:03:05.223038 | orchestrator | Tuesday 31 March 2026 03:03:03 +0000 (0:00:00.876) 0:00:30.480 ********* 2026-03-31 03:03:05.223049 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 03:03:05.223070 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 03:04:46.901006 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 03:04:46.901180 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-31 03:04:46.901197 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 03:04:46.901209 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 03:04:46.901220 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 03:04:46.901231 | orchestrator | 2026-03-31 03:04:46.901243 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-31 03:04:46.901254 | orchestrator | Tuesday 31 March 2026 03:03:05 +0000 (0:00:01.737) 0:00:32.218 ********* 2026-03-31 03:04:46.901265 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:04:46.901277 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:04:46.901288 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-31 03:04:46.901299 | orchestrator | 2026-03-31 03:04:46.901310 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-31 03:04:46.901321 | orchestrator | Tuesday 31 March 2026 03:03:05 +0000 (0:00:00.399) 0:00:32.618 ********* 2026-03-31 03:04:46.901334 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-31 03:04:46.901347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-31 03:04:46.901373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-31 03:04:46.901385 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-31 03:04:46.901396 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-31 03:04:46.901407 | orchestrator | 2026-03-31 03:04:46.901450 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-31 03:04:46.901462 | orchestrator | Tuesday 31 March 2026 03:03:51 +0000 (0:00:45.904) 0:01:18.522 ********* 2026-03-31 03:04:46.901472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901483 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901494 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 
03:04:46.901545 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-31 03:04:46.901563 | orchestrator | 2026-03-31 03:04:46.901581 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-31 03:04:46.901600 | orchestrator | Tuesday 31 March 2026 03:04:15 +0000 (0:00:24.486) 0:01:43.009 ********* 2026-03-31 03:04:46.901632 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901648 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901661 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901673 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901687 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901699 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901712 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 03:04:46.901724 | orchestrator | 2026-03-31 03:04:46.901736 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-31 03:04:46.901747 | orchestrator | Tuesday 31 March 2026 03:04:28 +0000 (0:00:12.890) 0:01:55.899 ********* 2026-03-31 03:04:46.901758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901787 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901799 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901810 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901821 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901832 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901843 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901853 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901868 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901886 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901898 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901909 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901920 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901930 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901941 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901952 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 03:04:46.901963 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 03:04:46.901973 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 03:04:46.901984 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-31 03:04:46.901995 | orchestrator | 2026-03-31 03:04:46.902012 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:04:46.902085 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-31 03:04:46.902099 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-31 03:04:46.902111 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-31 03:04:46.902122 | orchestrator | 2026-03-31 03:04:46.902133 | orchestrator | 2026-03-31 03:04:46.902144 | orchestrator | 2026-03-31 03:04:46.902155 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:04:46.902193 | orchestrator | Tuesday 31 March 2026 03:04:46 +0000 (0:00:17.576) 0:02:13.476 ********* 2026-03-31 03:04:46.902205 | orchestrator | =============================================================================== 2026-03-31 03:04:46.902216 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.90s 2026-03-31 03:04:46.902227 | orchestrator | generate keys ---------------------------------------------------------- 24.49s 2026-03-31 03:04:46.902238 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.58s 2026-03-31 03:04:46.902248 | orchestrator | get keys from monitors ------------------------------------------------- 12.89s 2026-03-31 03:04:46.902260 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.23s 2026-03-31 03:04:46.902271 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2026-03-31 03:04:46.902281 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.74s 2026-03-31 03:04:46.902292 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.17s 2026-03-31 03:04:46.902303 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.12s 2026-03-31 03:04:46.902314 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.97s 2026-03-31 
03:04:46.902325 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.92s 2026-03-31 03:04:46.902335 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.90s 2026-03-31 03:04:46.902346 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.88s 2026-03-31 03:04:46.902357 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.83s 2026-03-31 03:04:46.902368 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2026-03-31 03:04:46.902378 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.79s 2026-03-31 03:04:46.902470 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s 2026-03-31 03:04:46.902486 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.71s 2026-03-31 03:04:46.902497 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2026-03-31 03:04:46.902508 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-03-31 03:04:49.406554 | orchestrator | 2026-03-31 03:04:49 | INFO  | Task 4eb8a422-fc62-405e-8426-a4df5cca1eff (copy-ceph-keys) was prepared for execution. 2026-03-31 03:04:49.406644 | orchestrator | 2026-03-31 03:04:49 | INFO  | It takes a moment until task 4eb8a422-fc62-405e-8426-a4df5cca1eff (copy-ceph-keys) has been started and output is visible here. 
2026-03-31 03:05:29.688770 | orchestrator | 2026-03-31 03:05:29.688882 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-31 03:05:29.688898 | orchestrator | 2026-03-31 03:05:29.688909 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-31 03:05:29.688925 | orchestrator | Tuesday 31 March 2026 03:04:53 +0000 (0:00:00.172) 0:00:00.172 ********* 2026-03-31 03:05:29.688941 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-31 03:05:29.688960 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.688984 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.688999 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-31 03:05:29.689016 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689031 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-31 03:05:29.689046 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-31 03:05:29.689093 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-31 03:05:29.689109 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-31 03:05:29.689123 | orchestrator | 2026-03-31 03:05:29.689137 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-31 03:05:29.689153 | orchestrator | Tuesday 31 March 2026 03:04:58 +0000 (0:00:04.965) 0:00:05.137 ********* 2026-03-31 03:05:29.689187 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-31 03:05:29.689203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689233 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-31 03:05:29.689247 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689261 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-31 03:05:29.689276 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-31 03:05:29.689292 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-31 03:05:29.689308 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-31 03:05:29.689324 | orchestrator | 2026-03-31 03:05:29.689342 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-31 03:05:29.689358 | orchestrator | Tuesday 31 March 2026 03:05:03 +0000 (0:00:04.469) 0:00:09.607 ********* 2026-03-31 03:05:29.689376 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-31 03:05:29.689392 | orchestrator | 2026-03-31 03:05:29.689408 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-31 03:05:29.689525 | orchestrator | Tuesday 31 March 2026 03:05:04 +0000 (0:00:01.009) 0:00:10.617 ********* 2026-03-31 03:05:29.689547 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-31 
03:05:29.689566 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689583 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689624 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-31 03:05:29.689671 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.689704 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-31 03:05:29.689719 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-31 03:05:29.689734 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-31 03:05:29.689750 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-31 03:05:29.689765 | orchestrator | 2026-03-31 03:05:29.689782 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-31 03:05:29.689797 | orchestrator | Tuesday 31 March 2026 03:05:18 +0000 (0:00:14.275) 0:00:24.893 ********* 2026-03-31 03:05:29.689812 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-31 03:05:29.689826 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-31 03:05:29.689842 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-31 03:05:29.689857 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-31 03:05:29.689922 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-31 03:05:29.689943 | orchestrator 
| ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-31 03:05:29.689960 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-31 03:05:29.689976 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-31 03:05:29.689994 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-31 03:05:29.690009 | orchestrator | 2026-03-31 03:05:29.690081 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-31 03:05:29.690092 | orchestrator | Tuesday 31 March 2026 03:05:21 +0000 (0:00:03.223) 0:00:28.116 ********* 2026-03-31 03:05:29.690103 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-31 03:05:29.690112 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.690122 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.690131 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-31 03:05:29.690141 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-31 03:05:29.690151 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-31 03:05:29.690168 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-31 03:05:29.690186 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-31 03:05:29.690202 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-31 03:05:29.690220 | orchestrator | 2026-03-31 03:05:29.690247 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:05:29.690264 | orchestrator | testbed-manager : ok=6 
 changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 03:05:29.690282 | orchestrator | 2026-03-31 03:05:29.690299 | orchestrator | 2026-03-31 03:05:29.690316 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:05:29.690334 | orchestrator | Tuesday 31 March 2026 03:05:29 +0000 (0:00:07.451) 0:00:35.567 ********* 2026-03-31 03:05:29.690353 | orchestrator | =============================================================================== 2026-03-31 03:05:29.690370 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.28s 2026-03-31 03:05:29.690383 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.45s 2026-03-31 03:05:29.690392 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.97s 2026-03-31 03:05:29.690401 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.47s 2026-03-31 03:05:29.690411 | orchestrator | Check if target directories exist --------------------------------------- 3.22s 2026-03-31 03:05:29.690447 | orchestrator | Create share directory -------------------------------------------------- 1.01s 2026-03-31 03:05:42.141626 | orchestrator | 2026-03-31 03:05:42 | INFO  | Task 1b488326-35ad-4676-adf6-aa64fad7c05d (cephclient) was prepared for execution. 2026-03-31 03:05:42.141796 | orchestrator | 2026-03-31 03:05:42 | INFO  | It takes a moment until task 1b488326-35ad-4676-adf6-aa64fad7c05d (cephclient) has been started and output is visible here. 
2026-03-31 03:06:46.235136 | orchestrator | 2026-03-31 03:06:46.235238 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-31 03:06:46.235253 | orchestrator | 2026-03-31 03:06:46.235263 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-31 03:06:46.235272 | orchestrator | Tuesday 31 March 2026 03:05:46 +0000 (0:00:00.255) 0:00:00.255 ********* 2026-03-31 03:06:46.235282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-31 03:06:46.235318 | orchestrator | 2026-03-31 03:06:46.235328 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-31 03:06:46.235337 | orchestrator | Tuesday 31 March 2026 03:05:46 +0000 (0:00:00.274) 0:00:00.530 ********* 2026-03-31 03:06:46.235346 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-31 03:06:46.235355 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-31 03:06:46.235365 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-31 03:06:46.235374 | orchestrator | 2026-03-31 03:06:46.235383 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-31 03:06:46.235392 | orchestrator | Tuesday 31 March 2026 03:05:48 +0000 (0:00:01.311) 0:00:01.841 ********* 2026-03-31 03:06:46.235401 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-31 03:06:46.235410 | orchestrator | 2026-03-31 03:06:46.235419 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-31 03:06:46.235428 | orchestrator | Tuesday 31 March 2026 03:05:49 +0000 (0:00:01.520) 0:00:03.362 ********* 2026-03-31 03:06:46.235436 | orchestrator | 
changed: [testbed-manager] 2026-03-31 03:06:46.235445 | orchestrator | 2026-03-31 03:06:46.235454 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-31 03:06:46.235540 | orchestrator | Tuesday 31 March 2026 03:05:50 +0000 (0:00:00.987) 0:00:04.349 ********* 2026-03-31 03:06:46.235550 | orchestrator | changed: [testbed-manager] 2026-03-31 03:06:46.235558 | orchestrator | 2026-03-31 03:06:46.235567 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-31 03:06:46.235576 | orchestrator | Tuesday 31 March 2026 03:05:51 +0000 (0:00:00.972) 0:00:05.322 ********* 2026-03-31 03:06:46.235585 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-31 03:06:46.235593 | orchestrator | ok: [testbed-manager] 2026-03-31 03:06:46.235602 | orchestrator | 2026-03-31 03:06:46.235611 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-31 03:06:46.235620 | orchestrator | Tuesday 31 March 2026 03:06:35 +0000 (0:00:43.752) 0:00:49.074 ********* 2026-03-31 03:06:46.235628 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-31 03:06:46.235638 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-31 03:06:46.235646 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-31 03:06:46.235655 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-31 03:06:46.235664 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-31 03:06:46.235673 | orchestrator | 2026-03-31 03:06:46.235682 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-31 03:06:46.235692 | orchestrator | Tuesday 31 March 2026 03:06:39 +0000 (0:00:04.319) 0:00:53.394 ********* 2026-03-31 03:06:46.235703 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-31 03:06:46.235713 | 
orchestrator | 2026-03-31 03:06:46.235723 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-31 03:06:46.235733 | orchestrator | Tuesday 31 March 2026 03:06:40 +0000 (0:00:00.528) 0:00:53.922 ********* 2026-03-31 03:06:46.235743 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:06:46.235752 | orchestrator | 2026-03-31 03:06:46.235762 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-31 03:06:46.235772 | orchestrator | Tuesday 31 March 2026 03:06:40 +0000 (0:00:00.157) 0:00:54.079 ********* 2026-03-31 03:06:46.235782 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:06:46.235792 | orchestrator | 2026-03-31 03:06:46.235801 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-31 03:06:46.235823 | orchestrator | Tuesday 31 March 2026 03:06:41 +0000 (0:00:00.541) 0:00:54.621 ********* 2026-03-31 03:06:46.235834 | orchestrator | changed: [testbed-manager] 2026-03-31 03:06:46.235856 | orchestrator | 2026-03-31 03:06:46.235867 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-31 03:06:46.235875 | orchestrator | Tuesday 31 March 2026 03:06:42 +0000 (0:00:01.868) 0:00:56.490 ********* 2026-03-31 03:06:46.235884 | orchestrator | changed: [testbed-manager] 2026-03-31 03:06:46.235893 | orchestrator | 2026-03-31 03:06:46.235901 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-31 03:06:46.235910 | orchestrator | Tuesday 31 March 2026 03:06:43 +0000 (0:00:00.727) 0:00:57.217 ********* 2026-03-31 03:06:46.235919 | orchestrator | changed: [testbed-manager] 2026-03-31 03:06:46.235927 | orchestrator | 2026-03-31 03:06:46.235936 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-31 03:06:46.235945 | orchestrator | Tuesday 31 March 2026 
03:06:44 +0000 (0:00:00.630) 0:00:57.848 ********* 2026-03-31 03:06:46.235953 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-31 03:06:46.235962 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-31 03:06:46.235971 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-31 03:06:46.235979 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-31 03:06:46.235989 | orchestrator | 2026-03-31 03:06:46.235997 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:06:46.236006 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 03:06:46.236016 | orchestrator | 2026-03-31 03:06:46.236025 | orchestrator | 2026-03-31 03:06:46.236048 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:06:46.236058 | orchestrator | Tuesday 31 March 2026 03:06:45 +0000 (0:00:01.587) 0:00:59.436 ********* 2026-03-31 03:06:46.236067 | orchestrator | =============================================================================== 2026-03-31 03:06:46.236075 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.75s 2026-03-31 03:06:46.236084 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.32s 2026-03-31 03:06:46.236093 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.87s 2026-03-31 03:06:46.236102 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s 2026-03-31 03:06:46.236110 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.52s 2026-03-31 03:06:46.236119 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s 2026-03-31 03:06:46.236128 | orchestrator | osism.services.cephclient : Copy keyring file 
--------------------------- 0.99s 2026-03-31 03:06:46.236136 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2026-03-31 03:06:46.236145 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2026-03-31 03:06:46.236153 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s 2026-03-31 03:06:46.236162 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s 2026-03-31 03:06:46.236171 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.53s 2026-03-31 03:06:46.236179 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s 2026-03-31 03:06:46.236188 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-03-31 03:06:48.779992 | orchestrator | 2026-03-31 03:06:48 | INFO  | Task 29dd879a-a096-44a0-85f4-fa23f1f17494 (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-31 03:06:48.780078 | orchestrator | 2026-03-31 03:06:48 | INFO  | It takes a moment until task 29dd879a-a096-44a0-85f4-fa23f1f17494 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-03-31 03:08:19.502684 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-31 03:08:19.502771 | orchestrator | 2.16.14 2026-03-31 03:08:19.502780 | orchestrator | 2026-03-31 03:08:19.502786 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-31 03:08:19.502809 | orchestrator | 2026-03-31 03:08:19.502814 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-31 03:08:19.502820 | orchestrator | Tuesday 31 March 2026 03:06:53 +0000 (0:00:00.294) 0:00:00.294 ********* 2026-03-31 03:08:19.502825 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502830 | orchestrator | 2026-03-31 03:08:19.502835 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-31 03:08:19.502839 | orchestrator | Tuesday 31 March 2026 03:06:54 +0000 (0:00:01.419) 0:00:01.713 ********* 2026-03-31 03:08:19.502844 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502848 | orchestrator | 2026-03-31 03:08:19.502853 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-31 03:08:19.502858 | orchestrator | Tuesday 31 March 2026 03:06:55 +0000 (0:00:01.086) 0:00:02.800 ********* 2026-03-31 03:08:19.502875 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502882 | orchestrator | 2026-03-31 03:08:19.502889 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-31 03:08:19.502908 | orchestrator | Tuesday 31 March 2026 03:06:56 +0000 (0:00:01.110) 0:00:03.910 ********* 2026-03-31 03:08:19.502917 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502924 | orchestrator | 2026-03-31 03:08:19.502932 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-31 03:08:19.502939 | orchestrator | Tuesday 31 March 
2026 03:06:58 +0000 (0:00:01.258) 0:00:05.169 ********* 2026-03-31 03:08:19.502945 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502952 | orchestrator | 2026-03-31 03:08:19.502974 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-31 03:08:19.502982 | orchestrator | Tuesday 31 March 2026 03:06:59 +0000 (0:00:01.175) 0:00:06.344 ********* 2026-03-31 03:08:19.502990 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.502997 | orchestrator | 2026-03-31 03:08:19.503004 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-31 03:08:19.503012 | orchestrator | Tuesday 31 March 2026 03:07:00 +0000 (0:00:01.193) 0:00:07.538 ********* 2026-03-31 03:08:19.503018 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.503026 | orchestrator | 2026-03-31 03:08:19.503033 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-31 03:08:19.503040 | orchestrator | Tuesday 31 March 2026 03:07:02 +0000 (0:00:02.091) 0:00:09.630 ********* 2026-03-31 03:08:19.503049 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.503054 | orchestrator | 2026-03-31 03:08:19.503058 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-31 03:08:19.503063 | orchestrator | Tuesday 31 March 2026 03:07:03 +0000 (0:00:01.232) 0:00:10.862 ********* 2026-03-31 03:08:19.503067 | orchestrator | changed: [testbed-manager] 2026-03-31 03:08:19.503072 | orchestrator | 2026-03-31 03:08:19.503077 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-31 03:08:19.503082 | orchestrator | Tuesday 31 March 2026 03:07:54 +0000 (0:00:50.506) 0:01:01.369 ********* 2026-03-31 03:08:19.503087 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:08:19.503092 | orchestrator | 2026-03-31 03:08:19.503097 | orchestrator 
| PLAY [Restart ceph manager services] ******************************************* 2026-03-31 03:08:19.503106 | orchestrator | 2026-03-31 03:08:19.503114 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-31 03:08:19.503122 | orchestrator | Tuesday 31 March 2026 03:07:54 +0000 (0:00:00.189) 0:01:01.558 ********* 2026-03-31 03:08:19.503131 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:08:19.503138 | orchestrator | 2026-03-31 03:08:19.503147 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-31 03:08:19.503155 | orchestrator | 2026-03-31 03:08:19.503164 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-31 03:08:19.503172 | orchestrator | Tuesday 31 March 2026 03:08:06 +0000 (0:00:11.836) 0:01:13.395 ********* 2026-03-31 03:08:19.503190 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:08:19.503199 | orchestrator | 2026-03-31 03:08:19.503208 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-31 03:08:19.503217 | orchestrator | 2026-03-31 03:08:19.503223 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-31 03:08:19.503228 | orchestrator | Tuesday 31 March 2026 03:08:07 +0000 (0:00:01.324) 0:01:14.719 ********* 2026-03-31 03:08:19.503233 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:08:19.503238 | orchestrator | 2026-03-31 03:08:19.503245 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:08:19.503252 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 03:08:19.503260 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 03:08:19.503266 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 03:08:19.503273 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 03:08:19.503278 | orchestrator | 2026-03-31 03:08:19.503284 | orchestrator | 2026-03-31 03:08:19.503290 | orchestrator | 2026-03-31 03:08:19.503296 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:08:19.503302 | orchestrator | Tuesday 31 March 2026 03:08:19 +0000 (0:00:11.341) 0:01:26.061 ********* 2026-03-31 03:08:19.503308 | orchestrator | =============================================================================== 2026-03-31 03:08:19.503314 | orchestrator | Create admin user ------------------------------------------------------ 50.51s 2026-03-31 03:08:19.503335 | orchestrator | Restart ceph manager service ------------------------------------------- 24.50s 2026-03-31 03:08:19.503342 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2026-03-31 03:08:19.503349 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.42s 2026-03-31 03:08:19.503355 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2026-03-31 03:08:19.503361 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s 2026-03-31 03:08:19.503367 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.19s 2026-03-31 03:08:19.503373 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.18s 2026-03-31 03:08:19.503386 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.11s 2026-03-31 03:08:19.503392 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s 2026-03-31 03:08:19.503398 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.19s 2026-03-31 03:08:19.847475 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-31 03:08:21.958377 | orchestrator | 2026-03-31 03:08:21 | INFO  | Task a6343215-a2a9-49d6-a432-17bf0eaadc6f (keystone) was prepared for execution. 2026-03-31 03:08:21.958497 | orchestrator | 2026-03-31 03:08:21 | INFO  | It takes a moment until task a6343215-a2a9-49d6-a432-17bf0eaadc6f (keystone) has been started and output is visible here. 2026-03-31 03:08:29.526332 | orchestrator | 2026-03-31 03:08:29.526445 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:08:29.526461 | orchestrator | 2026-03-31 03:08:29.526491 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:08:29.526504 | orchestrator | Tuesday 31 March 2026 03:08:26 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 03:08:29.526595 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:08:29.526610 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:08:29.526621 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:08:29.526632 | orchestrator | 2026-03-31 03:08:29.526644 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:08:29.526675 | orchestrator | Tuesday 31 March 2026 03:08:26 +0000 (0:00:00.336) 0:00:00.611 ********* 2026-03-31 03:08:29.526686 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-31 03:08:29.526697 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-31 03:08:29.526708 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-31 03:08:29.526719 | orchestrator | 2026-03-31 03:08:29.526730 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-31 03:08:29.526741 | orchestrator | 2026-03-31 03:08:29.526752 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-31 03:08:29.526763 | orchestrator | Tuesday 31 March 2026 03:08:27 +0000 (0:00:00.470) 0:00:01.082 ********* 2026-03-31 03:08:29.526774 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:08:29.526786 | orchestrator | 2026-03-31 03:08:29.526797 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-31 03:08:29.526808 | orchestrator | Tuesday 31 March 2026 03:08:27 +0000 (0:00:00.580) 0:00:01.662 ********* 2026-03-31 03:08:29.526827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:29.526844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:29.526885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:29.526919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:29.526941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:29.526961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:29.526980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:29.526999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:29.527019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:29.527050 | orchestrator | 2026-03-31 03:08:29.527072 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-31 03:08:29.527105 | orchestrator | Tuesday 31 March 2026 03:08:29 +0000 (0:00:01.754) 0:00:03.416 ********* 2026-03-31 03:08:35.569947 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:35.570102 | orchestrator | 2026-03-31 03:08:35.570133 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-31 03:08:35.570144 | orchestrator | Tuesday 31 March 2026 03:08:29 +0000 (0:00:00.309) 0:00:03.726 ********* 2026-03-31 03:08:35.570154 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:35.570163 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:35.570172 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:35.570181 | orchestrator | 2026-03-31 03:08:35.570190 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-31 03:08:35.570199 | orchestrator | Tuesday 31 March 2026 03:08:30 +0000 (0:00:00.322) 0:00:04.048 ********* 2026-03-31 03:08:35.570209 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:08:35.570217 | orchestrator | 2026-03-31 03:08:35.570226 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-31 03:08:35.570235 | orchestrator | Tuesday 31 March 2026 03:08:31 +0000 (0:00:00.877) 0:00:04.926 ********* 2026-03-31 03:08:35.570244 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:08:35.570254 | orchestrator | 2026-03-31 03:08:35.570276 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-31 03:08:35.570285 | orchestrator | Tuesday 31 March 2026 03:08:31 +0000 (0:00:00.569) 0:00:05.496 ********* 2026-03-31 03:08:35.570310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:35.570324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:35.570335 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:35.570389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:35.570471 | orchestrator | 2026-03-31 03:08:35.570487 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-31 03:08:35.570502 | orchestrator | Tuesday 31 March 2026 03:08:34 +0000 (0:00:03.380) 0:00:08.876 ********* 2026-03-31 03:08:35.570550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:36.370087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:36.370250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:36.370272 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:36.370288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:36.370323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:36.370341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:36.370353 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:36.370387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:36.370401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-31 03:08:36.370412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:36.370440 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:36.370458 | orchestrator | 2026-03-31 03:08:36.370477 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-31 03:08:36.370497 | orchestrator | Tuesday 31 March 2026 03:08:35 +0000 (0:00:00.590) 0:00:09.467 ********* 2026-03-31 03:08:36.370516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:36.370608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:36.370646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:39.745327 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:39.745441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:39.745463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:39.745502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:39.745516 | 
orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:39.745608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:39.745632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:39.745667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:39.745679 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:39.745690 | orchestrator | 2026-03-31 03:08:39.745702 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-31 03:08:39.745791 | orchestrator | Tuesday 31 March 2026 03:08:36 +0000 (0:00:00.797) 0:00:10.264 ********* 2026-03-31 03:08:39.745805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:39.745830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:39.745850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:39.745877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:44.582283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:08:44.582438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-31 03:08:44.582456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:44.582467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:08:44.582491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 
03:08:44.582503 | orchestrator | 2026-03-31 03:08:44.582515 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-31 03:08:44.582548 | orchestrator | Tuesday 31 March 2026 03:08:39 +0000 (0:00:03.372) 0:00:13.636 ********* 2026-03-31 03:08:44.582602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:44.582630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-31 03:08:44.582643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:44.582654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:44.582671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:08:44.582690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:48.332953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-31 03:08:48.333030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-31 03:08:48.333036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-31 03:08:48.333041 | orchestrator |
2026-03-31 03:08:48.333046 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-31 03:08:48.333051 | orchestrator | Tuesday 31 March 2026 03:08:44 +0000 (0:00:04.838) 0:00:18.475 *********
2026-03-31 03:08:48.333055 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:08:48.333060 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:08:48.333063 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:08:48.333067 | orchestrator |
2026-03-31 03:08:48.333071 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-31 03:08:48.333075 | orchestrator | Tuesday 31 March 2026 03:08:45 +0000 (0:00:01.401) 0:00:19.876 ********* 2026-03-31 03:08:48.333078 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:48.333082 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:48.333086 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:48.333089 | orchestrator | 2026-03-31 03:08:48.333093 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-31 03:08:48.333097 | orchestrator | Tuesday 31 March 2026 03:08:46 +0000 (0:00:00.825) 0:00:20.702 ********* 2026-03-31 03:08:48.333101 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:48.333116 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:48.333120 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:48.333123 | orchestrator | 2026-03-31 03:08:48.333127 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-31 03:08:48.333131 | orchestrator | Tuesday 31 March 2026 03:08:47 +0000 (0:00:00.574) 0:00:21.276 ********* 2026-03-31 03:08:48.333134 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:48.333138 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:48.333142 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:08:48.333146 | orchestrator | 2026-03-31 03:08:48.333150 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-31 03:08:48.333154 | orchestrator | Tuesday 31 March 2026 03:08:47 +0000 (0:00:00.328) 0:00:21.604 ********* 2026-03-31 03:08:48.333187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:48.333193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:48.333198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:48.333202 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:08:48.333207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:08:48.333214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:08:48.333224 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:08:48.333228 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:08:48.333236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 03:09:07.480406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 03:09:07.480511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 03:09:07.480525 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:09:07.480534 | orchestrator | 2026-03-31 03:09:07.480590 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-31 03:09:07.480601 | orchestrator | Tuesday 31 March 2026 03:08:48 +0000 (0:00:00.620) 0:00:22.225 ********* 2026-03-31 03:09:07.480608 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:09:07.480615 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:09:07.480623 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:09:07.480630 | orchestrator | 2026-03-31 03:09:07.480638 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-31 03:09:07.480646 | orchestrator | Tuesday 31 March 2026 03:08:48 +0000 (0:00:00.300) 0:00:22.525 ********* 2026-03-31 03:09:07.480654 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-31 03:09:07.480684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-31 03:09:07.480704 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-31 03:09:07.480709 | orchestrator | 2026-03-31 03:09:07.480713 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-31 03:09:07.480718 | orchestrator | Tuesday 31 March 2026 03:08:50 +0000 (0:00:01.837) 0:00:24.363 ********* 2026-03-31 03:09:07.480722 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:09:07.480727 | orchestrator | 2026-03-31 03:09:07.480731 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-31 03:09:07.480735 | orchestrator | Tuesday 31 March 2026 03:08:51 +0000 (0:00:00.943) 0:00:25.307 ********* 2026-03-31 03:09:07.480740 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:09:07.480744 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:09:07.480748 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:09:07.480752 | orchestrator | 2026-03-31 03:09:07.480757 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-31 03:09:07.480761 | orchestrator | Tuesday 31 March 2026 03:08:51 +0000 (0:00:00.576) 0:00:25.884 ********* 2026-03-31 03:09:07.480766 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 03:09:07.480770 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:09:07.480774 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 03:09:07.480779 | orchestrator | 2026-03-31 03:09:07.480783 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-31 03:09:07.480788 | orchestrator | Tuesday 31 March 2026 03:08:53 +0000 (0:00:01.093) 
0:00:26.977 ********* 2026-03-31 03:09:07.480792 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:09:07.480798 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:09:07.480802 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:09:07.480806 | orchestrator | 2026-03-31 03:09:07.480811 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-31 03:09:07.480815 | orchestrator | Tuesday 31 March 2026 03:08:53 +0000 (0:00:00.566) 0:00:27.544 ********* 2026-03-31 03:09:07.480820 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-31 03:09:07.480824 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-31 03:09:07.480829 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-31 03:09:07.480833 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-31 03:09:07.480837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-31 03:09:07.480842 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-31 03:09:07.480846 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-31 03:09:07.480851 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-31 03:09:07.480867 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-31 03:09:07.480872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-31 03:09:07.480876 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-31 
03:09:07.480880 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-31 03:09:07.480885 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-31 03:09:07.480889 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-31 03:09:07.480893 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-31 03:09:07.480902 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:09:07.480907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:09:07.480911 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:09:07.480915 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:09:07.480920 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:09:07.480924 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:09:07.480928 | orchestrator | 2026-03-31 03:09:07.480933 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-31 03:09:07.480937 | orchestrator | Tuesday 31 March 2026 03:09:02 +0000 (0:00:08.884) 0:00:36.429 ********* 2026-03-31 03:09:07.480941 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:09:07.480946 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:09:07.480950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:09:07.480954 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:09:07.480958 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:09:07.480963 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:09:07.480969 | orchestrator | 2026-03-31 03:09:07.480977 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-31 03:09:07.480983 | orchestrator | Tuesday 31 March 2026 03:09:05 +0000 (0:00:02.655) 0:00:39.085 ********* 2026-03-31 03:09:07.480990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:09:07.481002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:10:53.685464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 03:10:53.685573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-31 03:10:53.685735 | orchestrator | 2026-03-31 03:10:53.685747 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-31 03:10:53.685758 | orchestrator | Tuesday 31 March 2026 03:09:07 +0000 (0:00:02.288) 0:00:41.374 ********* 2026-03-31 03:10:53.685768 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:10:53.685778 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:10:53.685787 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:10:53.685797 | orchestrator | 2026-03-31 03:10:53.685806 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-31 03:10:53.685815 | orchestrator | Tuesday 31 March 2026 03:09:08 +0000 (0:00:00.552) 0:00:41.926 ********* 2026-03-31 03:10:53.685825 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.685834 | orchestrator | 2026-03-31 03:10:53.685844 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-31 03:10:53.685853 | orchestrator | Tuesday 31 March 2026 03:09:10 +0000 (0:00:02.450) 0:00:44.377 ********* 2026-03-31 03:10:53.685862 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.685872 | orchestrator | 2026-03-31 03:10:53.685881 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-31 03:10:53.685890 | orchestrator | Tuesday 31 March 2026 03:09:12 +0000 (0:00:02.286) 0:00:46.664 ********* 2026-03-31 03:10:53.685900 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:10:53.685909 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:10:53.685920 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:10:53.685936 | orchestrator | 2026-03-31 03:10:53.685952 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-31 03:10:53.685969 | orchestrator | Tuesday 31 March 2026 03:09:13 +0000 (0:00:00.841) 0:00:47.505 ********* 2026-03-31 03:10:53.685986 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:10:53.686002 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:10:53.686071 | orchestrator | ok: 
[testbed-node-2] 2026-03-31 03:10:53.686093 | orchestrator | 2026-03-31 03:10:53.686129 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-31 03:10:53.686148 | orchestrator | Tuesday 31 March 2026 03:09:13 +0000 (0:00:00.367) 0:00:47.873 ********* 2026-03-31 03:10:53.686165 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:10:53.686182 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:10:53.686199 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:10:53.686212 | orchestrator | 2026-03-31 03:10:53.686221 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-31 03:10:53.686231 | orchestrator | Tuesday 31 March 2026 03:09:14 +0000 (0:00:00.594) 0:00:48.467 ********* 2026-03-31 03:10:53.686240 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.686249 | orchestrator | 2026-03-31 03:10:53.686259 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-31 03:10:53.686268 | orchestrator | Tuesday 31 March 2026 03:09:30 +0000 (0:00:15.469) 0:01:03.937 ********* 2026-03-31 03:10:53.686278 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.686287 | orchestrator | 2026-03-31 03:10:53.686296 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-31 03:10:53.686317 | orchestrator | Tuesday 31 March 2026 03:09:41 +0000 (0:00:11.704) 0:01:15.642 ********* 2026-03-31 03:10:53.686326 | orchestrator | 2026-03-31 03:10:53.686336 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-31 03:10:53.686345 | orchestrator | Tuesday 31 March 2026 03:09:41 +0000 (0:00:00.066) 0:01:15.709 ********* 2026-03-31 03:10:53.686355 | orchestrator | 2026-03-31 03:10:53.686364 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-31 
03:10:53.686374 | orchestrator | Tuesday 31 March 2026 03:09:41 +0000 (0:00:00.070) 0:01:15.779 ********* 2026-03-31 03:10:53.686383 | orchestrator | 2026-03-31 03:10:53.686392 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-31 03:10:53.686402 | orchestrator | Tuesday 31 March 2026 03:09:41 +0000 (0:00:00.077) 0:01:15.856 ********* 2026-03-31 03:10:53.686411 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.686421 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:10:53.686431 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:10:53.686440 | orchestrator | 2026-03-31 03:10:53.686449 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-31 03:10:53.686459 | orchestrator | Tuesday 31 March 2026 03:10:30 +0000 (0:00:48.347) 0:02:04.204 ********* 2026-03-31 03:10:53.686468 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.686478 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:10:53.686488 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:10:53.686502 | orchestrator | 2026-03-31 03:10:53.686518 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-31 03:10:53.686535 | orchestrator | Tuesday 31 March 2026 03:10:40 +0000 (0:00:10.367) 0:02:14.571 ********* 2026-03-31 03:10:53.686550 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:10:53.686566 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:10:53.686583 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:10:53.686600 | orchestrator | 2026-03-31 03:10:53.686654 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-31 03:10:53.686672 | orchestrator | Tuesday 31 March 2026 03:10:53 +0000 (0:00:12.395) 0:02:26.966 ********* 2026-03-31 03:10:53.686698 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:11:47.080325 | orchestrator | 2026-03-31 03:11:47.080485 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-31 03:11:47.080514 | orchestrator | Tuesday 31 March 2026 03:10:53 +0000 (0:00:00.612) 0:02:27.579 ********* 2026-03-31 03:11:47.080528 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:11:47.080541 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:11:47.080552 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:11:47.080562 | orchestrator | 2026-03-31 03:11:47.080574 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-31 03:11:47.080585 | orchestrator | Tuesday 31 March 2026 03:10:54 +0000 (0:00:01.232) 0:02:28.811 ********* 2026-03-31 03:11:47.080596 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:11:47.080607 | orchestrator | 2026-03-31 03:11:47.080618 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-31 03:11:47.080629 | orchestrator | Tuesday 31 March 2026 03:10:56 +0000 (0:00:01.948) 0:02:30.760 ********* 2026-03-31 03:11:47.080688 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-31 03:11:47.080706 | orchestrator | 2026-03-31 03:11:47.080725 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-31 03:11:47.080744 | orchestrator | Tuesday 31 March 2026 03:11:09 +0000 (0:00:12.386) 0:02:43.146 ********* 2026-03-31 03:11:47.080761 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-31 03:11:47.080781 | orchestrator | 2026-03-31 03:11:47.080801 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-31 03:11:47.080820 | orchestrator | Tuesday 31 March 2026 03:11:34 +0000 (0:00:25.482) 0:03:08.628 ********* 2026-03-31 03:11:47.080873 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-31 03:11:47.080897 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-31 03:11:47.080916 | orchestrator | 2026-03-31 03:11:47.080934 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-31 03:11:47.080953 | orchestrator | Tuesday 31 March 2026 03:11:41 +0000 (0:00:07.063) 0:03:15.692 ********* 2026-03-31 03:11:47.080972 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:11:47.080992 | orchestrator | 2026-03-31 03:11:47.081011 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-31 03:11:47.081031 | orchestrator | Tuesday 31 March 2026 03:11:41 +0000 (0:00:00.140) 0:03:15.833 ********* 2026-03-31 03:11:47.081048 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:11:47.081061 | orchestrator | 2026-03-31 03:11:47.081074 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-31 03:11:47.081102 | orchestrator | Tuesday 31 March 2026 03:11:42 +0000 (0:00:00.195) 0:03:16.028 ********* 2026-03-31 03:11:47.081115 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:11:47.081128 | orchestrator | 2026-03-31 03:11:47.081141 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-31 03:11:47.081154 | orchestrator | Tuesday 31 March 2026 03:11:42 +0000 (0:00:00.141) 0:03:16.170 ********* 2026-03-31 03:11:47.081166 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:11:47.081178 | orchestrator | 2026-03-31 03:11:47.081190 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-31 03:11:47.081202 | orchestrator | Tuesday 31 March 2026 03:11:42 +0000 (0:00:00.583) 0:03:16.753 ********* 2026-03-31 03:11:47.081215 | orchestrator | ok: [testbed-node-0] 2026-03-31 
03:11:47.081227 | orchestrator | 2026-03-31 03:11:47.081238 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-31 03:11:47.081249 | orchestrator | Tuesday 31 March 2026 03:11:46 +0000 (0:00:03.265) 0:03:20.018 ********* 2026-03-31 03:11:47.081259 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:11:47.081270 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:11:47.081280 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:11:47.081291 | orchestrator | 2026-03-31 03:11:47.081301 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:11:47.081314 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 03:11:47.081326 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-31 03:11:47.081337 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-31 03:11:47.081347 | orchestrator | 2026-03-31 03:11:47.081358 | orchestrator | 2026-03-31 03:11:47.081369 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:11:47.081379 | orchestrator | Tuesday 31 March 2026 03:11:46 +0000 (0:00:00.518) 0:03:20.537 ********* 2026-03-31 03:11:47.081390 | orchestrator | =============================================================================== 2026-03-31 03:11:47.081401 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.35s 2026-03-31 03:11:47.081411 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.48s 2026-03-31 03:11:47.081422 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.47s 2026-03-31 03:11:47.081432 | orchestrator | keystone : Restart keystone container 
---------------------------------- 12.40s 2026-03-31 03:11:47.081443 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.39s 2026-03-31 03:11:47.081453 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.70s 2026-03-31 03:11:47.081464 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.37s 2026-03-31 03:11:47.081484 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.89s 2026-03-31 03:11:47.081495 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.06s 2026-03-31 03:11:47.081528 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.84s 2026-03-31 03:11:47.081540 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.38s 2026-03-31 03:11:47.081550 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.37s 2026-03-31 03:11:47.081561 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s 2026-03-31 03:11:47.081572 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.66s 2026-03-31 03:11:47.081582 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s 2026-03-31 03:11:47.081593 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2026-03-31 03:11:47.081604 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s 2026-03-31 03:11:47.081614 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.95s 2026-03-31 03:11:47.081625 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.84s 2026-03-31 03:11:47.081636 | orchestrator | keystone : Ensuring config directories exist 
---------------------------- 1.75s 2026-03-31 03:11:49.539952 | orchestrator | 2026-03-31 03:11:49 | INFO  | Task cf5a7e6e-7fd7-467c-88cf-b376804b17a6 (placement) was prepared for execution. 2026-03-31 03:11:49.540067 | orchestrator | 2026-03-31 03:11:49 | INFO  | It takes a moment until task cf5a7e6e-7fd7-467c-88cf-b376804b17a6 (placement) has been started and output is visible here. 2026-03-31 03:12:25.686375 | orchestrator | 2026-03-31 03:12:25.686503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:12:25.686521 | orchestrator | 2026-03-31 03:12:25.686533 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:12:25.686545 | orchestrator | Tuesday 31 March 2026 03:11:53 +0000 (0:00:00.267) 0:00:00.267 ********* 2026-03-31 03:12:25.686557 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:12:25.686572 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:12:25.686583 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:12:25.686594 | orchestrator | 2026-03-31 03:12:25.686605 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:12:25.686613 | orchestrator | Tuesday 31 March 2026 03:11:54 +0000 (0:00:00.345) 0:00:00.613 ********* 2026-03-31 03:12:25.686621 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-31 03:12:25.686642 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-31 03:12:25.686649 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-31 03:12:25.686656 | orchestrator | 2026-03-31 03:12:25.686663 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-31 03:12:25.686719 | orchestrator | 2026-03-31 03:12:25.686727 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-31 03:12:25.686734 | orchestrator | 
Tuesday 31 March 2026 03:11:54 +0000 (0:00:00.476) 0:00:01.089 ********* 2026-03-31 03:12:25.686742 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:12:25.686750 | orchestrator | 2026-03-31 03:12:25.686757 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-31 03:12:25.686764 | orchestrator | Tuesday 31 March 2026 03:11:55 +0000 (0:00:00.585) 0:00:01.675 ********* 2026-03-31 03:12:25.686771 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-31 03:12:25.686778 | orchestrator | 2026-03-31 03:12:25.686784 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-31 03:12:25.686791 | orchestrator | Tuesday 31 March 2026 03:11:59 +0000 (0:00:04.004) 0:00:05.679 ********* 2026-03-31 03:12:25.686818 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-31 03:12:25.686826 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-31 03:12:25.686833 | orchestrator | 2026-03-31 03:12:25.686840 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-31 03:12:25.686846 | orchestrator | Tuesday 31 March 2026 03:12:06 +0000 (0:00:06.891) 0:00:12.570 ********* 2026-03-31 03:12:25.686853 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-31 03:12:25.686860 | orchestrator | 2026-03-31 03:12:25.686866 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-31 03:12:25.686873 | orchestrator | Tuesday 31 March 2026 03:12:09 +0000 (0:00:03.679) 0:00:16.250 ********* 2026-03-31 03:12:25.686880 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 03:12:25.686886 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-03-31 03:12:25.686893 | orchestrator | 2026-03-31 03:12:25.686900 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-31 03:12:25.686908 | orchestrator | Tuesday 31 March 2026 03:12:13 +0000 (0:00:04.131) 0:00:20.381 ********* 2026-03-31 03:12:25.686915 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:12:25.686923 | orchestrator | 2026-03-31 03:12:25.686930 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-31 03:12:25.686938 | orchestrator | Tuesday 31 March 2026 03:12:17 +0000 (0:00:03.320) 0:00:23.701 ********* 2026-03-31 03:12:25.686946 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-31 03:12:25.686953 | orchestrator | 2026-03-31 03:12:25.686961 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-31 03:12:25.686968 | orchestrator | Tuesday 31 March 2026 03:12:21 +0000 (0:00:03.898) 0:00:27.600 ********* 2026-03-31 03:12:25.686976 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:25.686983 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:12:25.686991 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:12:25.686998 | orchestrator | 2026-03-31 03:12:25.687006 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-31 03:12:25.687013 | orchestrator | Tuesday 31 March 2026 03:12:21 +0000 (0:00:00.312) 0:00:27.913 ********* 2026-03-31 03:12:25.687024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:25.687059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:25.687074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:25.687082 | orchestrator | 2026-03-31 03:12:25.687090 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-31 03:12:25.687098 | orchestrator | Tuesday 31 March 2026 03:12:22 +0000 (0:00:01.198) 0:00:29.111 ********* 2026-03-31 03:12:25.687106 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:25.687114 | orchestrator | 2026-03-31 03:12:25.687121 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-31 03:12:25.687138 | orchestrator | Tuesday 31 March 2026 03:12:23 +0000 (0:00:00.350) 0:00:29.461 ********* 2026-03-31 03:12:25.687146 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:25.687161 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:12:25.687169 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:12:25.687180 | orchestrator | 2026-03-31 03:12:25.687192 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-31 03:12:25.687203 | orchestrator | Tuesday 31 March 2026 03:12:23 +0000 (0:00:00.328) 0:00:29.789 ********* 2026-03-31 03:12:25.687214 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:12:25.687225 | orchestrator | 2026-03-31 03:12:25.687237 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-31 03:12:25.687249 | orchestrator | Tuesday 31 March 2026 
03:12:23 +0000 (0:00:00.563) 0:00:30.353 ********* 2026-03-31 03:12:25.687261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:25.687282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:28.631541 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:28.631635 | orchestrator | 2026-03-31 03:12:28.631649 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-31 03:12:28.631657 | orchestrator | Tuesday 31 March 2026 03:12:25 +0000 (0:00:01.698) 0:00:32.051 ********* 2026-03-31 03:12:28.631665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631719 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:28.631729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631735 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:12:28.631741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631770 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:12:28.631777 | orchestrator | 2026-03-31 03:12:28.631783 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-31 03:12:28.631803 | orchestrator | Tuesday 31 March 2026 03:12:26 +0000 (0:00:00.520) 0:00:32.572 ********* 2026-03-31 03:12:28.631817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631823 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:28.631828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631834 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:12:28.631840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:28.631846 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:12:28.631851 | orchestrator | 2026-03-31 03:12:28.631857 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-31 03:12:28.631863 | orchestrator | Tuesday 31 March 2026 03:12:26 +0000 (0:00:00.739) 0:00:33.312 ********* 2026-03-31 03:12:28.631869 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:28.631893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:35.914337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:35.915182 | orchestrator | 2026-03-31 03:12:35.915205 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-31 03:12:35.915213 | orchestrator | Tuesday 31 March 2026 03:12:28 +0000 (0:00:01.687) 0:00:35.000 ********* 2026-03-31 03:12:35.915221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-03-31 03:12:35.915230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:35.915264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:12:35.915271 | orchestrator | 2026-03-31 03:12:35.915277 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-31 03:12:35.915283 | orchestrator | Tuesday 31 March 2026 03:12:31 +0000 (0:00:02.408) 0:00:37.408 ********* 2026-03-31 03:12:35.915302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-31 03:12:35.915310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-31 03:12:35.915315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-31 03:12:35.915321 | orchestrator | 2026-03-31 03:12:35.915327 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-31 03:12:35.915332 | orchestrator | Tuesday 31 March 2026 03:12:32 +0000 (0:00:01.465) 0:00:38.874 ********* 2026-03-31 03:12:35.915338 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:12:35.915345 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:12:35.915351 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:12:35.915357 | orchestrator | 2026-03-31 03:12:35.915362 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-31 03:12:35.915368 | orchestrator | Tuesday 31 March 2026 03:12:33 +0000 (0:00:01.398) 0:00:40.272 ********* 2026-03-31 03:12:35.915374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:35.915385 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:12:35.915391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:35.915397 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:12:35.915403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-31 03:12:35.915409 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:12:35.915415 | orchestrator | 2026-03-31 03:12:35.915424 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-31 03:12:35.915430 | orchestrator | Tuesday 31 March 2026 03:12:34 +0000 (0:00:00.821) 0:00:41.093 ********* 2026-03-31 03:12:35.915442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:13:06.150819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:13:06.150950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-31 03:13:06.150961 | orchestrator | 2026-03-31 03:13:06.150969 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-31 03:13:06.150977 | orchestrator | Tuesday 31 March 2026 03:12:35 +0000 (0:00:01.194) 0:00:42.288 ********* 2026-03-31 03:13:06.150983 | orchestrator | changed: [testbed-node-0] 2026-03-31 
03:13:06.150991 | orchestrator | 2026-03-31 03:13:06.150997 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-31 03:13:06.151004 | orchestrator | Tuesday 31 March 2026 03:12:38 +0000 (0:00:02.249) 0:00:44.538 ********* 2026-03-31 03:13:06.151010 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:13:06.151016 | orchestrator | 2026-03-31 03:13:06.151023 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-31 03:13:06.151029 | orchestrator | Tuesday 31 March 2026 03:12:40 +0000 (0:00:02.359) 0:00:46.898 ********* 2026-03-31 03:13:06.151035 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:13:06.151041 | orchestrator | 2026-03-31 03:13:06.151047 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-31 03:13:06.151053 | orchestrator | Tuesday 31 March 2026 03:12:55 +0000 (0:00:14.557) 0:01:01.456 ********* 2026-03-31 03:13:06.151059 | orchestrator | 2026-03-31 03:13:06.151065 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-31 03:13:06.151072 | orchestrator | Tuesday 31 March 2026 03:12:55 +0000 (0:00:00.079) 0:01:01.535 ********* 2026-03-31 03:13:06.151078 | orchestrator | 2026-03-31 03:13:06.151085 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-31 03:13:06.151091 | orchestrator | Tuesday 31 March 2026 03:12:55 +0000 (0:00:00.079) 0:01:01.614 ********* 2026-03-31 03:13:06.151097 | orchestrator | 2026-03-31 03:13:06.151103 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-31 03:13:06.151122 | orchestrator | Tuesday 31 March 2026 03:12:55 +0000 (0:00:00.077) 0:01:01.691 ********* 2026-03-31 03:13:06.151128 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:13:06.151134 | orchestrator | changed: [testbed-node-1] 2026-03-31 
03:13:06.151140 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:13:06.151146 | orchestrator | 2026-03-31 03:13:06.151153 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:13:06.151160 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-31 03:13:06.151167 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 03:13:06.151174 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 03:13:06.151180 | orchestrator | 2026-03-31 03:13:06.151186 | orchestrator | 2026-03-31 03:13:06.151192 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:13:06.151204 | orchestrator | Tuesday 31 March 2026 03:13:05 +0000 (0:00:10.447) 0:01:12.139 ********* 2026-03-31 03:13:06.151211 | orchestrator | =============================================================================== 2026-03-31 03:13:06.151217 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.56s 2026-03-31 03:13:06.151237 | orchestrator | placement : Restart placement-api container ---------------------------- 10.45s 2026-03-31 03:13:06.151244 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.89s 2026-03-31 03:13:06.151250 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.13s 2026-03-31 03:13:06.151256 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.00s 2026-03-31 03:13:06.151263 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.90s 2026-03-31 03:13:06.151269 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.68s 2026-03-31 03:13:06.151275 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.32s 2026-03-31 03:13:06.151280 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.41s 2026-03-31 03:13:06.151286 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s 2026-03-31 03:13:06.151292 | orchestrator | placement : Creating placement databases -------------------------------- 2.25s 2026-03-31 03:13:06.151299 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.70s 2026-03-31 03:13:06.151304 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-03-31 03:13:06.151311 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.47s 2026-03-31 03:13:06.151317 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.40s 2026-03-31 03:13:06.151323 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.20s 2026-03-31 03:13:06.151330 | orchestrator | placement : Check placement containers ---------------------------------- 1.19s 2026-03-31 03:13:06.151336 | orchestrator | placement : Copying over existing policy file --------------------------- 0.82s 2026-03-31 03:13:06.151342 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2026-03-31 03:13:06.151348 | orchestrator | placement : include_tasks ----------------------------------------------- 0.59s 2026-03-31 03:13:08.658177 | orchestrator | 2026-03-31 03:13:08 | INFO  | Task f2219d06-89fe-4b3a-b651-cd596f6ba97d (neutron) was prepared for execution. 2026-03-31 03:13:08.658334 | orchestrator | 2026-03-31 03:13:08 | INFO  | It takes a moment until task f2219d06-89fe-4b3a-b651-cd596f6ba97d (neutron) has been started and output is visible here. 
2026-03-31 03:13:59.156419 | orchestrator | 2026-03-31 03:13:59.156535 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:13:59.156553 | orchestrator | 2026-03-31 03:13:59.156566 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:13:59.156578 | orchestrator | Tuesday 31 March 2026 03:13:13 +0000 (0:00:00.263) 0:00:00.263 ********* 2026-03-31 03:13:59.156589 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:13:59.156602 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:13:59.156613 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:13:59.156624 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:13:59.156636 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:13:59.156647 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:13:59.156658 | orchestrator | 2026-03-31 03:13:59.156669 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:13:59.156681 | orchestrator | Tuesday 31 March 2026 03:13:13 +0000 (0:00:00.741) 0:00:01.005 ********* 2026-03-31 03:13:59.156692 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-31 03:13:59.156704 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-31 03:13:59.156715 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-31 03:13:59.156726 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-31 03:13:59.156792 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-31 03:13:59.156806 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-31 03:13:59.156817 | orchestrator | 2026-03-31 03:13:59.156828 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-31 03:13:59.156839 | orchestrator | 2026-03-31 03:13:59.156849 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-31 03:13:59.156875 | orchestrator | Tuesday 31 March 2026 03:13:14 +0000 (0:00:00.685) 0:00:01.690 ********* 2026-03-31 03:13:59.156888 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:13:59.156900 | orchestrator | 2026-03-31 03:13:59.156910 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-31 03:13:59.156921 | orchestrator | Tuesday 31 March 2026 03:13:15 +0000 (0:00:01.346) 0:00:03.037 ********* 2026-03-31 03:13:59.156931 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:13:59.156942 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:13:59.156953 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:13:59.156966 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:13:59.156978 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:13:59.156990 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:13:59.157002 | orchestrator | 2026-03-31 03:13:59.157015 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-31 03:13:59.157027 | orchestrator | Tuesday 31 March 2026 03:13:17 +0000 (0:00:01.421) 0:00:04.458 ********* 2026-03-31 03:13:59.157040 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:13:59.157052 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:13:59.157064 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:13:59.157076 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:13:59.157088 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:13:59.157100 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:13:59.157112 | orchestrator | 2026-03-31 03:13:59.157125 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-31 03:13:59.157137 | orchestrator | Tuesday 31 March 2026 03:13:18 +0000 (0:00:01.092) 0:00:05.551 ********* 
2026-03-31 03:13:59.157150 | orchestrator | ok: [testbed-node-0] => { 2026-03-31 03:13:59.157162 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157174 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157187 | orchestrator | } 2026-03-31 03:13:59.157199 | orchestrator | ok: [testbed-node-1] => { 2026-03-31 03:13:59.157212 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157224 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157236 | orchestrator | } 2026-03-31 03:13:59.157248 | orchestrator | ok: [testbed-node-2] => { 2026-03-31 03:13:59.157260 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157272 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157284 | orchestrator | } 2026-03-31 03:13:59.157297 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 03:13:59.157309 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157322 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157334 | orchestrator | } 2026-03-31 03:13:59.157345 | orchestrator | ok: [testbed-node-4] => { 2026-03-31 03:13:59.157356 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157367 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157378 | orchestrator | } 2026-03-31 03:13:59.157388 | orchestrator | ok: [testbed-node-5] => { 2026-03-31 03:13:59.157399 | orchestrator |  "changed": false, 2026-03-31 03:13:59.157409 | orchestrator |  "msg": "All assertions passed" 2026-03-31 03:13:59.157420 | orchestrator | } 2026-03-31 03:13:59.157431 | orchestrator | 2026-03-31 03:13:59.157441 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-31 03:13:59.157452 | orchestrator | Tuesday 31 March 2026 03:13:19 +0000 (0:00:00.874) 0:00:06.426 ********* 2026-03-31 03:13:59.157463 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:13:59.157482 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:13:59.157493 | orchestrator 
| skipping: [testbed-node-2] 2026-03-31 03:13:59.157504 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:13:59.157514 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:13:59.157525 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:13:59.157535 | orchestrator | 2026-03-31 03:13:59.157546 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-31 03:13:59.157557 | orchestrator | Tuesday 31 March 2026 03:13:20 +0000 (0:00:00.740) 0:00:07.166 ********* 2026-03-31 03:13:59.157568 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-31 03:13:59.157579 | orchestrator | 2026-03-31 03:13:59.157589 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-31 03:13:59.157600 | orchestrator | Tuesday 31 March 2026 03:13:24 +0000 (0:00:04.142) 0:00:11.309 ********* 2026-03-31 03:13:59.157611 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-31 03:13:59.157623 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-31 03:13:59.157634 | orchestrator | 2026-03-31 03:13:59.157662 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-31 03:13:59.157674 | orchestrator | Tuesday 31 March 2026 03:13:30 +0000 (0:00:06.668) 0:00:17.978 ********* 2026-03-31 03:13:59.157685 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 03:13:59.157696 | orchestrator | 2026-03-31 03:13:59.157707 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-31 03:13:59.157717 | orchestrator | Tuesday 31 March 2026 03:13:34 +0000 (0:00:03.340) 0:00:21.319 ********* 2026-03-31 03:13:59.157728 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 03:13:59.157739 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-31 03:13:59.157785 | orchestrator | 2026-03-31 03:13:59.157798 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-31 03:13:59.157809 | orchestrator | Tuesday 31 March 2026 03:13:38 +0000 (0:00:03.973) 0:00:25.292 ********* 2026-03-31 03:13:59.157820 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:13:59.157831 | orchestrator | 2026-03-31 03:13:59.157842 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-31 03:13:59.157852 | orchestrator | Tuesday 31 March 2026 03:13:41 +0000 (0:00:03.298) 0:00:28.590 ********* 2026-03-31 03:13:59.157863 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-31 03:13:59.157873 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-31 03:13:59.157884 | orchestrator | 2026-03-31 03:13:59.157895 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-31 03:13:59.157906 | orchestrator | Tuesday 31 March 2026 03:13:49 +0000 (0:00:08.293) 0:00:36.884 ********* 2026-03-31 03:13:59.157916 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:13:59.157927 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:13:59.157944 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:13:59.157955 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:13:59.157965 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:13:59.157976 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:13:59.157987 | orchestrator | 2026-03-31 03:13:59.157997 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-31 03:13:59.158008 | orchestrator | Tuesday 31 March 2026 03:13:50 +0000 (0:00:00.885) 0:00:37.769 ********* 2026-03-31 03:13:59.158057 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
03:13:59.158071 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:13:59.158081 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:13:59.158092 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:13:59.158102 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:13:59.158113 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:13:59.158124 | orchestrator | 2026-03-31 03:13:59.158142 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-31 03:13:59.158153 | orchestrator | Tuesday 31 March 2026 03:13:52 +0000 (0:00:02.267) 0:00:40.037 ********* 2026-03-31 03:13:59.158164 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:13:59.158175 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:13:59.158185 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:13:59.158196 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:13:59.158207 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:13:59.158217 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:13:59.158228 | orchestrator | 2026-03-31 03:13:59.158239 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-31 03:13:59.158250 | orchestrator | Tuesday 31 March 2026 03:13:54 +0000 (0:00:01.260) 0:00:41.298 ********* 2026-03-31 03:13:59.158260 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:13:59.158271 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:13:59.158282 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:13:59.158292 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:13:59.158303 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:13:59.158313 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:13:59.158324 | orchestrator | 2026-03-31 03:13:59.158335 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-31 03:13:59.158346 | orchestrator | Tuesday 31 March 2026 03:13:56 +0000 (0:00:02.366) 
0:00:43.665 ********* 2026-03-31 03:13:59.158360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:13:59.158387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:04.816751 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:04.816956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:04.816976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:04.816988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:04.817000 | orchestrator | 2026-03-31 03:14:04.817013 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-31 03:14:04.817026 | orchestrator | Tuesday 31 March 2026 03:13:59 +0000 (0:00:02.624) 0:00:46.289 ********* 2026-03-31 03:14:04.817039 | orchestrator | [WARNING]: Skipped 2026-03-31 03:14:04.817059 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-31 03:14:04.817079 | orchestrator | due to this access issue: 2026-03-31 03:14:04.817099 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-31 03:14:04.817117 | orchestrator | a directory 2026-03-31 03:14:04.817137 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:14:04.817156 | orchestrator | 2026-03-31 03:14:04.817175 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-31 03:14:04.817194 | orchestrator | Tuesday 31 March 2026 03:14:00 +0000 (0:00:00.932) 0:00:47.221 ********* 2026-03-31 03:14:04.817214 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:14:04.817229 | orchestrator | 2026-03-31 03:14:04.817240 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-31 03:14:04.817270 | orchestrator | Tuesday 31 March 2026 03:14:01 +0000 (0:00:01.326) 0:00:48.548 ********* 2026-03-31 03:14:04.817293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:04.817319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:04.817333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:04.817346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:04.817368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:10.254667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:10.254839 | orchestrator | 2026-03-31 03:14:10.254862 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-31 03:14:10.254878 | orchestrator | Tuesday 31 March 2026 03:14:04 +0000 (0:00:03.390) 0:00:51.938 ********* 2026-03-31 03:14:10.254895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:10.254910 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:10.254925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:10.254939 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:10.254954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:10.254991 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:10.255026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:10.255041 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:10.255063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:10.255077 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:10.255091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:10.255105 | orchestrator | skipping: [testbed-node-5] 
2026-03-31 03:14:10.255118 | orchestrator | 2026-03-31 03:14:10.255132 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-31 03:14:10.255146 | orchestrator | Tuesday 31 March 2026 03:14:07 +0000 (0:00:02.432) 0:00:54.371 ********* 2026-03-31 03:14:10.255159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:10.255174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:10.255206 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:16.288326 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:16.288483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:16.288513 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:16.288528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:16.288540 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:16.288551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:16.288561 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:16.288571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:16.288603 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:16.288614 | orchestrator | 2026-03-31 
03:14:16.288624 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-31 03:14:16.288635 | orchestrator | Tuesday 31 March 2026 03:14:10 +0000 (0:00:03.015) 0:00:57.386 ********* 2026-03-31 03:14:16.288645 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:16.288654 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:16.288664 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:16.288673 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:16.288683 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:16.288692 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:16.288701 | orchestrator | 2026-03-31 03:14:16.288711 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-31 03:14:16.288721 | orchestrator | Tuesday 31 March 2026 03:14:12 +0000 (0:00:02.544) 0:00:59.930 ********* 2026-03-31 03:14:16.288730 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:16.288740 | orchestrator | 2026-03-31 03:14:16.288749 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-31 03:14:16.288807 | orchestrator | Tuesday 31 March 2026 03:14:12 +0000 (0:00:00.182) 0:01:00.113 ********* 2026-03-31 03:14:16.288826 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:16.288850 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:16.288869 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:16.288886 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:16.288902 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:16.288917 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:16.288934 | orchestrator | 2026-03-31 03:14:16.288951 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-31 03:14:16.288968 | orchestrator | Tuesday 31 March 2026 03:14:13 +0000 (0:00:00.668) 
0:01:00.782 ********* 2026-03-31 03:14:16.288997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:16.289016 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:16.289035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 
03:14:16.289068 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:16.289086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:16.289099 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:16.289109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:16.289119 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:16.289160 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:26.100045 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:26.100122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:26.100131 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:26.100136 | orchestrator | 2026-03-31 03:14:26.100141 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-31 03:14:26.100147 | orchestrator | Tuesday 31 March 2026 03:14:16 +0000 (0:00:02.631) 0:01:03.413 ********* 2026-03-31 03:14:26.100152 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:26.100174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:26.100178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:26.100204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:26.100210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:26.100218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:26.100222 | orchestrator | 2026-03-31 03:14:26.100226 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-31 03:14:26.100230 | orchestrator | Tuesday 31 March 2026 03:14:19 +0000 (0:00:03.377) 0:01:06.790 ********* 2026-03-31 03:14:26.100234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:26.100239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:26.100250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:31.184503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:31.184628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 
03:14:31.184646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-31 03:14:31.184660 | orchestrator | 2026-03-31 03:14:31.184673 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-31 03:14:31.184687 | orchestrator | Tuesday 31 March 2026 03:14:26 +0000 (0:00:06.440) 0:01:13.231 ********* 2026-03-31 03:14:31.184713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-31 03:14:31.184727 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:31.184758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:31.184780 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:31.184792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:31.184862 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:31.184874 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:31.184885 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:31.184896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:31.184908 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:31.184924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:31.184936 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:31.184955 | orchestrator | 2026-03-31 03:14:31.184966 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-31 03:14:31.184977 | orchestrator | Tuesday 31 March 2026 03:14:28 +0000 (0:00:02.258) 0:01:15.490 ********* 2026-03-31 03:14:31.184988 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:31.184999 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:14:31.185010 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:31.185020 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:31.185033 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:14:31.185045 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:14:31.185058 | orchestrator | 2026-03-31 03:14:31.185071 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-31 03:14:31.185092 | orchestrator | Tuesday 31 March 2026 03:14:31 +0000 (0:00:02.822) 0:01:18.312 ********* 2026-03-31 03:14:51.806140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:51.806266 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:51.806299 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:51.806319 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:51.806395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:51.806406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 03:14:51.806416 | orchestrator | 2026-03-31 03:14:51.806425 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-31 03:14:51.806435 | orchestrator | Tuesday 31 March 2026 03:14:34 +0000 (0:00:03.432) 0:01:21.744 ********* 2026-03-31 03:14:51.806445 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806453 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806462 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806472 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806481 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806489 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806499 | orchestrator | 2026-03-31 03:14:51.806506 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-03-31 03:14:51.806512 | orchestrator | Tuesday 31 March 2026 03:14:37 +0000 (0:00:02.419) 0:01:24.163 ********* 2026-03-31 03:14:51.806521 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806529 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806538 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806547 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806556 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806565 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806574 | orchestrator | 2026-03-31 03:14:51.806583 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-31 03:14:51.806592 | orchestrator | Tuesday 31 March 2026 03:14:39 +0000 (0:00:02.440) 0:01:26.604 ********* 2026-03-31 03:14:51.806598 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806604 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806609 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806614 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806619 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806625 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806637 | orchestrator | 2026-03-31 03:14:51.806643 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-31 03:14:51.806649 | orchestrator | Tuesday 31 March 2026 03:14:42 +0000 (0:00:02.587) 0:01:29.191 ********* 2026-03-31 03:14:51.806655 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806662 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806668 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806674 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806680 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806685 | orchestrator | 
skipping: [testbed-node-4] 2026-03-31 03:14:51.806691 | orchestrator | 2026-03-31 03:14:51.806697 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-31 03:14:51.806703 | orchestrator | Tuesday 31 March 2026 03:14:44 +0000 (0:00:02.504) 0:01:31.696 ********* 2026-03-31 03:14:51.806709 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806716 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806721 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806728 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806734 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806739 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806745 | orchestrator | 2026-03-31 03:14:51.806751 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-31 03:14:51.806757 | orchestrator | Tuesday 31 March 2026 03:14:46 +0000 (0:00:02.414) 0:01:34.110 ********* 2026-03-31 03:14:51.806764 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806783 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806789 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806795 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:51.806801 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806807 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:51.806813 | orchestrator | 2026-03-31 03:14:51.806819 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-31 03:14:51.806875 | orchestrator | Tuesday 31 March 2026 03:14:49 +0000 (0:00:02.314) 0:01:36.424 ********* 2026-03-31 03:14:51.806881 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-31 03:14:51.806887 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  
2026-03-31 03:14:51.806892 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:51.806898 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:51.806903 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-31 03:14:51.806908 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:51.806914 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-31 03:14:51.806919 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:51.806930 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-31 03:14:56.424439 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:14:56.424525 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-31 03:14:56.424536 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:56.424544 | orchestrator | 2026-03-31 03:14:56.424551 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-31 03:14:56.424559 | orchestrator | Tuesday 31 March 2026 03:14:51 +0000 (0:00:02.510) 0:01:38.935 ********* 2026-03-31 03:14:56.424569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:56.424596 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:56.424604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:56.424611 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:14:56.424618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:56.424626 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:14:56.424643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:56.424650 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:56.424670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:56.424748 | orchestrator | 
skipping: [testbed-node-4] 2026-03-31 03:14:56.424757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:14:56.424764 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:14:56.424770 | orchestrator | 2026-03-31 03:14:56.424777 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-31 03:14:56.424784 | orchestrator | Tuesday 31 March 2026 03:14:54 +0000 (0:00:02.347) 0:01:41.282 ********* 2026-03-31 03:14:56.424791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:56.424803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:14:56.424810 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:14:56.424817 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:14:56.424848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 03:15:24.462547 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.462635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:15:24.462647 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.462655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:15:24.462663 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.462670 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 03:15:24.462677 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.462684 | orchestrator | 2026-03-31 03:15:24.462691 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-31 03:15:24.462699 | orchestrator | Tuesday 31 March 2026 03:14:56 +0000 (0:00:02.271) 0:01:43.554 ********* 2026-03-31 03:15:24.462706 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.462712 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.462719 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.462726 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.462739 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.462745 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.462752 | orchestrator | 2026-03-31 03:15:24.462759 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-31 03:15:24.462765 | orchestrator | Tuesday 31 March 2026 03:14:58 +0000 (0:00:02.231) 0:01:45.785 ********* 2026-03-31 03:15:24.462772 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.462778 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
03:15:24.462785 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.462792 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:15:24.462798 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:15:24.462805 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:15:24.462826 | orchestrator | 2026-03-31 03:15:24.462833 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-31 03:15:24.462840 | orchestrator | Tuesday 31 March 2026 03:15:02 +0000 (0:00:03.988) 0:01:49.773 ********* 2026-03-31 03:15:24.462846 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.462912 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.462919 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.462925 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.462932 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.462939 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.462945 | orchestrator | 2026-03-31 03:15:24.462952 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-31 03:15:24.462958 | orchestrator | Tuesday 31 March 2026 03:15:05 +0000 (0:00:02.506) 0:01:52.280 ********* 2026-03-31 03:15:24.462965 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.462971 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.462978 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.462984 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.462991 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.462997 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463004 | orchestrator | 2026-03-31 03:15:24.463010 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-31 03:15:24.463030 | orchestrator | Tuesday 31 March 2026 03:15:07 +0000 (0:00:02.467) 0:01:54.747 ********* 2026-03-31 
03:15:24.463038 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463044 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.463051 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.463057 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.463064 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463070 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.463077 | orchestrator | 2026-03-31 03:15:24.463083 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-31 03:15:24.463090 | orchestrator | Tuesday 31 March 2026 03:15:10 +0000 (0:00:02.417) 0:01:57.165 ********* 2026-03-31 03:15:24.463096 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.463103 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.463111 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463119 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.463125 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.463132 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463139 | orchestrator | 2026-03-31 03:15:24.463146 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-31 03:15:24.463153 | orchestrator | Tuesday 31 March 2026 03:15:12 +0000 (0:00:02.439) 0:01:59.604 ********* 2026-03-31 03:15:24.463160 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.463167 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463174 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.463181 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.463188 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463195 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.463202 | orchestrator | 2026-03-31 03:15:24.463209 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-31 03:15:24.463216 | orchestrator | Tuesday 31 March 2026 03:15:14 +0000 (0:00:02.304) 0:02:01.909 ********* 2026-03-31 03:15:24.463223 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.463230 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.463237 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.463244 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463251 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.463257 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463264 | orchestrator | 2026-03-31 03:15:24.463271 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-31 03:15:24.463279 | orchestrator | Tuesday 31 March 2026 03:15:17 +0000 (0:00:02.447) 0:02:04.357 ********* 2026-03-31 03:15:24.463292 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463299 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:15:24.463306 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:15:24.463312 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:15:24.463319 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:15:24.463326 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:15:24.463333 | orchestrator | 2026-03-31 03:15:24.463340 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-31 03:15:24.463347 | orchestrator | Tuesday 31 March 2026 03:15:19 +0000 (0:00:02.476) 0:02:06.833 ********* 2026-03-31 03:15:24.463354 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-31 03:15:24.463362 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:15:24.463369 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-31 03:15:24.463376 | orchestrator | skipping: [testbed-node-0] 
2026-03-31 03:15:24.463383 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-31 03:15:24.463390 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:15:24.463397 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-31 03:15:24.463404 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:15:24.463411 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-31 03:15:24.463418 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:15:24.463475 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-31 03:15:24.463484 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:15:24.463490 | orchestrator |
2026-03-31 03:15:24.463496 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-31 03:15:24.463502 | orchestrator | Tuesday 31 March 2026 03:15:21 +0000 (0:00:02.255) 0:02:09.089 *********
2026-03-31 03:15:24.463509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:24.463518 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:15:24.463530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:27.095129 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:15:27.095251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:27.095276 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:15:27.095292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:15:27.095309 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:15:27.095345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:15:27.095362 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:15:27.095372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:15:27.095381 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:15:27.095390 | orchestrator |
2026-03-31 03:15:27.095400 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-31 03:15:27.095410 | orchestrator | Tuesday 31 March 2026 03:15:24 +0000 (0:00:02.503) 0:02:11.592 *********
2026-03-31 03:15:27.095439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:27.095473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:27.095487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-31 03:15:27.095497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:15:27.095507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:15:27.095533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/',
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 03:17:41.839317 | orchestrator |
2026-03-31 03:17:41.839410 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-31 03:17:41.839425 | orchestrator | Tuesday 31 March 2026 03:15:27 +0000 (0:00:02.633) 0:02:14.226 *********
2026-03-31 03:17:41.839435 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:17:41.839446 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:17:41.839456 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:17:41.839466 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:17:41.839476 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:17:41.839486 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:17:41.839498 | orchestrator |
2026-03-31 03:17:41.839509 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-31 03:17:41.839519 | orchestrator | Tuesday 31 March 2026 03:15:27 +0000 (0:00:00.810) 0:02:15.037 *********
2026-03-31 03:17:41.839525 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:17:41.839531 | orchestrator |
2026-03-31 03:17:41.839537 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-31 03:17:41.839543 | orchestrator | Tuesday 31 March 2026 03:15:29 +0000 (0:00:01.982) 0:02:17.019 *********
2026-03-31 03:17:41.839549 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:17:41.839555 | orchestrator |
2026-03-31 03:17:41.839561 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-31 03:17:41.839566 | orchestrator | Tuesday 31 March 2026 03:15:32 +0000 (0:00:02.194) 0:02:19.214 *********
2026-03-31 03:17:41.839572 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:17:41.839579 | orchestrator |
2026-03-31 03:17:41.839588 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839597 | orchestrator | Tuesday 31 March 2026 03:16:13 +0000 (0:00:41.667) 0:03:00.881 *********
2026-03-31 03:17:41.839611 | orchestrator |
2026-03-31 03:17:41.839623 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839632 | orchestrator | Tuesday 31 March 2026 03:16:13 +0000 (0:00:00.133) 0:03:01.014 *********
2026-03-31 03:17:41.839641 | orchestrator |
2026-03-31 03:17:41.839651 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839660 | orchestrator | Tuesday 31 March 2026 03:16:13 +0000 (0:00:00.079) 0:03:01.094 *********
2026-03-31 03:17:41.839668 | orchestrator |
2026-03-31 03:17:41.839676 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839701 | orchestrator | Tuesday 31 March 2026 03:16:14 +0000 (0:00:00.070) 0:03:01.164 *********
2026-03-31 03:17:41.839711 | orchestrator |
2026-03-31 03:17:41.839782 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839792 | orchestrator | Tuesday 31 March 2026 03:16:14 +0000 (0:00:00.072) 0:03:01.236 *********
2026-03-31 03:17:41.839801 | orchestrator |
2026-03-31 03:17:41.839810 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-31 03:17:41.839819 | orchestrator | Tuesday 31 March 2026 03:16:14 +0000 (0:00:00.068) 0:03:01.305 *********
2026-03-31 03:17:41.839828 | orchestrator |
2026-03-31 03:17:41.839858 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-31 03:17:41.839868 | orchestrator | Tuesday 31 March 2026 03:16:14 +0000 (0:00:00.072) 0:03:01.378 *********
2026-03-31 03:17:41.839877 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:17:41.839886 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:17:41.839896 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:17:41.839907 | orchestrator |
2026-03-31 03:17:41.839916 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-31 03:17:41.839925 | orchestrator | Tuesday 31 March 2026 03:16:39 +0000 (0:00:24.825) 0:03:26.203 *********
2026-03-31 03:17:41.839934 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:17:41.839942 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:17:41.839951 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:17:41.839961 | orchestrator |
2026-03-31 03:17:41.839970 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:17:41.839983 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 03:17:41.839996 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-31 03:17:41.840005 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-31 03:17:41.840015 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 03:17:41.840026 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 03:17:41.840035 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-31 03:17:41.840044 | orchestrator |
2026-03-31 03:17:41.840052 | orchestrator |
2026-03-31 03:17:41.840061 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:17:41.840070 | orchestrator | Tuesday 31 March 2026 03:17:41 +0000 (0:01:02.248) 0:04:28.451 *********
2026-03-31 03:17:41.840080 | orchestrator | ===============================================================================
2026-03-31 03:17:41.840089 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.25s
2026-03-31 03:17:41.840098 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.67s
2026-03-31 03:17:41.840107 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.83s
2026-03-31 03:17:41.840138 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.29s
2026-03-31 03:17:41.840148 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.67s
2026-03-31 03:17:41.840157 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.44s
2026-03-31 03:17:41.840166 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.14s
2026-03-31 03:17:41.840175 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.99s
2026-03-31 03:17:41.840185 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.97s
2026-03-31 03:17:41.840194 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.43s
2026-03-31 03:17:41.840203 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.39s
2026-03-31 03:17:41.840213 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.38s
2026-03-31 03:17:41.840221 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.34s
2026-03-31 03:17:41.840230 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.30s
2026-03-31 03:17:41.840239 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.02s
2026-03-31 03:17:41.840261 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.82s
2026-03-31 03:17:41.840271 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.63s
2026-03-31 03:17:41.840281 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.63s
2026-03-31 03:17:41.840289 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.62s
2026-03-31 03:17:41.840298 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 2.59s
2026-03-31 03:17:46.858788 | orchestrator | 2026-03-31 03:17:46 | INFO  | Task ee65dcdb-d21c-4c31-b01b-c7f0e3b47407 (nova) was prepared for execution.
2026-03-31 03:17:46.858892 | orchestrator | 2026-03-31 03:17:46 | INFO  | It takes a moment until task ee65dcdb-d21c-4c31-b01b-c7f0e3b47407 (nova) has been started and output is visible here.
2026-03-31 03:19:38.463601 | orchestrator |
2026-03-31 03:19:38.463703 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:19:38.463715 | orchestrator |
2026-03-31 03:19:38.463722 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-31 03:19:38.463730 | orchestrator | Tuesday 31 March 2026 03:17:51 +0000 (0:00:00.289) 0:00:00.289 *********
2026-03-31 03:19:38.463737 | orchestrator | changed: [testbed-manager]
2026-03-31 03:19:38.463745 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.463751 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:19:38.463757 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:19:38.463764 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:19:38.463770 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:19:38.463777 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:19:38.463784 | orchestrator |
2026-03-31 03:19:38.463790 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:19:38.463797 | orchestrator | Tuesday 31 March 2026 03:17:52 +0000 (0:00:01.125) 0:00:01.415 *********
2026-03-31 03:19:38.463804 | orchestrator | changed: [testbed-manager]
2026-03-31 03:19:38.463811 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.463818 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:19:38.463825 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:19:38.463831 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:19:38.463837 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:19:38.463843 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:19:38.463850 | orchestrator |
2026-03-31 03:19:38.463856 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:19:38.463863 | orchestrator | Tuesday 31 March 2026 03:17:53 +0000 (0:00:00.884) 0:00:02.299 *********
2026-03-31 03:19:38.463869 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-31 03:19:38.463876 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-31 03:19:38.463884 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-31 03:19:38.463891 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-31 03:19:38.463898 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-31 03:19:38.463905 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-31 03:19:38.463926 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-31 03:19:38.463933 | orchestrator |
2026-03-31 03:19:38.463940 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-31 03:19:38.463948 | orchestrator |
2026-03-31 03:19:38.463955 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-31 03:19:38.463963 | orchestrator | Tuesday 31 March 2026 03:17:54 +0000 (0:00:00.783) 0:00:03.083 *********
2026-03-31 03:19:38.463971 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:19:38.463977 | orchestrator |
2026-03-31 03:19:38.463984 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-31 03:19:38.464015 | orchestrator | Tuesday 31 March 2026 03:17:55 +0000 (0:00:00.806) 0:00:03.889 *********
2026-03-31 03:19:38.464023 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-31 03:19:38.464031 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-31 03:19:38.464039 | orchestrator |
2026-03-31 03:19:38.464046 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-31 03:19:38.464053 | orchestrator | Tuesday 31 March 2026 03:17:59 +0000 (0:00:03.928) 0:00:07.818 *********
2026-03-31 03:19:38.464060 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 03:19:38.464067 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 03:19:38.464073 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464079 | orchestrator |
2026-03-31 03:19:38.464085 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-31 03:19:38.464091 | orchestrator | Tuesday 31 March 2026 03:18:02 +0000 (0:00:03.849) 0:00:11.668 *********
2026-03-31 03:19:38.464098 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464104 | orchestrator |
2026-03-31 03:19:38.464110 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-31 03:19:38.464116 | orchestrator | Tuesday 31 March 2026 03:18:03 +0000 (0:00:00.706) 0:00:12.375 *********
2026-03-31 03:19:38.464172 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464179 | orchestrator |
2026-03-31 03:19:38.464186 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-31 03:19:38.464192 | orchestrator | Tuesday 31 March 2026 03:18:04 +0000 (0:00:01.227) 0:00:13.602 *********
2026-03-31 03:19:38.464198 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464205 | orchestrator |
2026-03-31 03:19:38.464211 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-31 03:19:38.464217 | orchestrator | Tuesday 31 March 2026 03:18:07 +0000 (0:00:02.650) 0:00:16.252 *********
2026-03-31 03:19:38.464224 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464232 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464238 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464244 | orchestrator |
2026-03-31 03:19:38.464251 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-31 03:19:38.464257 | orchestrator | Tuesday 31 March 2026 03:18:07 +0000 (0:00:00.329) 0:00:16.582 *********
2026-03-31 03:19:38.464264 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:19:38.464271 | orchestrator |
2026-03-31 03:19:38.464278 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-31 03:19:38.464285 | orchestrator | Tuesday 31 March 2026 03:18:38 +0000 (0:00:30.671) 0:00:47.253 *********
2026-03-31 03:19:38.464291 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464296 | orchestrator |
2026-03-31 03:19:38.464302 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-31 03:19:38.464308 | orchestrator | Tuesday 31 March 2026 03:18:51 +0000 (0:00:13.268) 0:01:00.522 *********
2026-03-31 03:19:38.464313 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:19:38.464319 | orchestrator |
2026-03-31 03:19:38.464325 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-31 03:19:38.464346 | orchestrator | Tuesday 31 March 2026 03:19:02 +0000 (0:00:10.507) 0:01:11.030 *********
2026-03-31 03:19:38.464373 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:19:38.464380 | orchestrator |
2026-03-31 03:19:38.464387 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-31 03:19:38.464394 | orchestrator | Tuesday 31 March 2026 03:19:02 +0000 (0:00:00.725) 0:01:11.755 *********
2026-03-31 03:19:38.464400 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464407 | orchestrator |
2026-03-31 03:19:38.464414 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-31 03:19:38.464418 | orchestrator | Tuesday 31 March 2026 03:19:03 +0000 (0:00:00.506) 0:01:12.261 *********
2026-03-31 03:19:38.464423 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:19:38.464436 | orchestrator |
2026-03-31 03:19:38.464440 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-31 03:19:38.464444 | orchestrator | Tuesday 31 March 2026 03:19:04 +0000 (0:00:00.744) 0:01:13.006 *********
2026-03-31 03:19:38.464447 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:19:38.464451 | orchestrator |
2026-03-31 03:19:38.464455 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-31 03:19:38.464459 | orchestrator | Tuesday 31 March 2026 03:19:20 +0000 (0:00:16.332) 0:01:29.339 *********
2026-03-31 03:19:38.464463 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464467 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464471 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464475 | orchestrator |
2026-03-31 03:19:38.464479 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-31 03:19:38.464483 | orchestrator |
2026-03-31 03:19:38.464487 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-31 03:19:38.464491 | orchestrator | Tuesday 31 March 2026 03:19:20 +0000 (0:00:00.322) 0:01:29.661 *********
2026-03-31 03:19:38.464495 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:19:38.464499 | orchestrator |
2026-03-31 03:19:38.464502 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-31 03:19:38.464506 | orchestrator | Tuesday 31 March 2026 03:19:21 +0000 (0:00:00.782) 0:01:30.444 *********
2026-03-31 03:19:38.464510 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464514 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464518 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464522 | orchestrator |
2026-03-31 03:19:38.464526 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-31 03:19:38.464530 | orchestrator | Tuesday 31 March 2026 03:19:23 +0000 (0:00:01.894) 0:01:32.339 *********
2026-03-31 03:19:38.464533 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464537 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464541 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464545 | orchestrator |
2026-03-31 03:19:38.464549 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-31 03:19:38.464553 | orchestrator | Tuesday 31 March 2026 03:19:25 +0000 (0:00:02.018) 0:01:34.357 *********
2026-03-31 03:19:38.464557 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464560 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464564 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464568 | orchestrator |
2026-03-31 03:19:38.464572 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-31 03:19:38.464576 | orchestrator | Tuesday 31 March 2026 03:19:26 +0000 (0:00:00.571) 0:01:34.928 *********
2026-03-31 03:19:38.464580 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-31 03:19:38.464584 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464588 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-31 03:19:38.464591 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464595 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 03:19:38.464600 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-31 03:19:38.464603 | orchestrator |
2026-03-31 03:19:38.464607 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-31 03:19:38.464611 | orchestrator | Tuesday 31 March 2026 03:19:33 +0000 (0:00:06.847) 0:01:41.776 *********
2026-03-31 03:19:38.464615 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464619 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464623 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464627 | orchestrator |
2026-03-31 03:19:38.464631 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-31 03:19:38.464635 | orchestrator | Tuesday 31 March 2026 03:19:33 +0000 (0:00:00.326) 0:01:42.102 *********
2026-03-31 03:19:38.464639 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-31 03:19:38.464646 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:19:38.464650 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-31 03:19:38.464654 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464658 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-31 03:19:38.464662 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464666 | orchestrator |
2026-03-31 03:19:38.464670 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-31 03:19:38.464674 | orchestrator | Tuesday 31 March 2026 03:19:34 +0000 (0:00:01.175) 0:01:43.277 *********
2026-03-31 03:19:38.464678 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464681 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464685 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464689 | orchestrator |
2026-03-31 03:19:38.464693 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-31 03:19:38.464697 | orchestrator | Tuesday 31 March 2026 03:19:34 +0000 (0:00:00.467) 0:01:43.744 *********
2026-03-31 03:19:38.464701 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464705 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464708 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:19:38.464712 | orchestrator |
2026-03-31 03:19:38.464716 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-31 03:19:38.464720 | orchestrator | Tuesday 31 March 2026 03:19:35 +0000 (0:00:00.907) 0:01:44.652 *********
2026-03-31 03:19:38.464724 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:19:38.464728 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:19:38.464735 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:20:52.227801 | orchestrator |
2026-03-31 03:20:52.228039 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-31 03:20:52.228070 | orchestrator | Tuesday 31 March 2026 03:19:38 +0000 (0:00:02.556) 0:01:47.208 *********
2026-03-31 03:20:52.228090 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:20:52.228112 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:20:52.228133 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:20:52.228155 | orchestrator |
2026-03-31 03:20:52.228176 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-31 03:20:52.228197 | orchestrator | Tuesday 31 March 2026 03:19:59 +0000 (0:00:20.682) 0:02:07.890 *********
2026-03-31 03:20:52.228218 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:20:52.228238 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:20:52.228259 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:20:52.228280 | orchestrator |
2026-03-31 03:20:52.228302 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-31 03:20:52.228325 | orchestrator | Tuesday 31 March 2026 03:20:10 +0000 (0:00:11.440) 0:02:19.330 *********
2026-03-31 03:20:52.228345 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:20:52.228364 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:20:52.228383 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:20:52.228404 | orchestrator | 2026-03-31 03:20:52.228423 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-31 03:20:52.228444 | orchestrator | Tuesday 31 March 2026 03:20:11 +0000 (0:00:01.223) 0:02:20.554 ********* 2026-03-31 03:20:52.228465 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:52.228487 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:52.228506 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:20:52.228526 | orchestrator | 2026-03-31 03:20:52.228545 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-31 03:20:52.228565 | orchestrator | Tuesday 31 March 2026 03:20:23 +0000 (0:00:11.473) 0:02:32.027 ********* 2026-03-31 03:20:52.228585 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:52.228605 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:52.228623 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:52.228643 | orchestrator | 2026-03-31 03:20:52.228663 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-31 03:20:52.228719 | orchestrator | Tuesday 31 March 2026 03:20:24 +0000 (0:00:01.125) 0:02:33.153 ********* 2026-03-31 03:20:52.228739 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:52.228759 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:52.228778 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:52.228797 | orchestrator | 2026-03-31 03:20:52.228854 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-31 03:20:52.228874 | orchestrator | 2026-03-31 03:20:52.228893 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-31 03:20:52.228912 | orchestrator | Tuesday 31 March 2026 03:20:24 +0000 (0:00:00.339) 0:02:33.492 ********* 2026-03-31 03:20:52.228932 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:20:52.228952 | orchestrator | 
2026-03-31 03:20:52.228972 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-31 03:20:52.228991 | orchestrator | Tuesday 31 March 2026 03:20:25 +0000 (0:00:00.788) 0:02:34.281 *********
2026-03-31 03:20:52.229010 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2026-03-31 03:20:52.229032 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-31 03:20:52.229052 | orchestrator | 
2026-03-31 03:20:52.229072 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-31 03:20:52.229090 | orchestrator | Tuesday 31 March 2026 03:20:28 +0000 (0:00:02.983) 0:02:37.265 *********
2026-03-31 03:20:52.229111 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2026-03-31 03:20:52.229186 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2026-03-31 03:20:52.229210 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-31 03:20:52.229232 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-31 03:20:52.229251 | orchestrator | 
2026-03-31 03:20:52.229271 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-31 03:20:52.229290 | orchestrator | Tuesday 31 March 2026 03:20:34 +0000 (0:00:05.948) 0:02:43.213 *********
2026-03-31 03:20:52.229308 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:20:52.229326 | orchestrator | 
2026-03-31 03:20:52.229344 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-31 03:20:52.229364 | orchestrator | Tuesday 31 March 2026 03:20:37 +0000 (0:00:02.958) 0:02:46.171 *********
2026-03-31 03:20:52.229384 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:20:52.229403 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-31 03:20:52.229421 | orchestrator | 
2026-03-31 03:20:52.229441 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-31 03:20:52.229459 | orchestrator | Tuesday 31 March 2026 03:20:40 +0000 (0:00:03.495) 0:02:49.666 *********
2026-03-31 03:20:52.229478 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:20:52.229536 | orchestrator | 
2026-03-31 03:20:52.229558 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-31 03:20:52.229570 | orchestrator | Tuesday 31 March 2026 03:20:43 +0000 (0:00:02.996) 0:02:52.663 *********
2026-03-31 03:20:52.229581 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-31 03:20:52.229591 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-31 03:20:52.229602 | orchestrator | 
2026-03-31 03:20:52.229625 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-31 03:20:52.229674 | orchestrator | Tuesday 31 March 2026 03:20:50 +0000 (0:00:06.981) 0:02:59.645 *********
2026-03-31 03:20:52.229704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:52.229757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:52.229781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:52.229852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-31 03:20:56.912024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:20:56.912131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:20:56.912147 | orchestrator | 2026-03-31 03:20:56.912161 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-31 03:20:56.912173 | orchestrator | Tuesday 31 March 2026 03:20:52 +0000 (0:00:01.326) 0:03:00.972 ********* 2026-03-31 03:20:56.912185 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:56.912197 | orchestrator | 2026-03-31 03:20:56.912208 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-31 03:20:56.912219 | orchestrator | Tuesday 31 March 2026 03:20:52 +0000 (0:00:00.142) 0:03:01.115 ********* 2026-03-31 03:20:56.912229 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:56.912240 | 
orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:56.912250 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:56.912261 | orchestrator | 2026-03-31 03:20:56.912272 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-31 03:20:56.912282 | orchestrator | Tuesday 31 March 2026 03:20:52 +0000 (0:00:00.332) 0:03:01.447 ********* 2026-03-31 03:20:56.912293 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:20:56.912303 | orchestrator | 2026-03-31 03:20:56.912314 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-31 03:20:56.912324 | orchestrator | Tuesday 31 March 2026 03:20:53 +0000 (0:00:00.731) 0:03:02.179 ********* 2026-03-31 03:20:56.912335 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:56.912345 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:56.912356 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:56.912366 | orchestrator | 2026-03-31 03:20:56.912377 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-31 03:20:56.912388 | orchestrator | Tuesday 31 March 2026 03:20:53 +0000 (0:00:00.555) 0:03:02.735 ********* 2026-03-31 03:20:56.912399 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:20:56.912410 | orchestrator | 2026-03-31 03:20:56.912421 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-31 03:20:56.912432 | orchestrator | Tuesday 31 March 2026 03:20:54 +0000 (0:00:00.610) 0:03:03.345 ********* 2026-03-31 03:20:56.912491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:56.912551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:56.912567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:20:56.912579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:20:56.912592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:20:56.912617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:20:56.912629 | orchestrator | 2026-03-31 03:20:56.912647 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-31 03:20:58.652828 | orchestrator | Tuesday 31 March 2026 03:20:56 +0000 (0:00:02.309) 0:03:05.654 ********* 2026-03-31 03:20:58.652941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:20:58.652965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:20:58.652981 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:20:58.652995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:20:58.653045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:20:58.653058 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:20:58.653090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:20:58.653104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:20:58.653116 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:20:58.653127 | orchestrator | 2026-03-31 03:20:58.653139 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-31 03:20:58.653151 | orchestrator | Tuesday 31 March 2026 03:20:57 +0000 (0:00:00.895) 0:03:06.550 
********* 2026-03-31 03:20:58.653163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:20:58.653184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:20:58.653195 | orchestrator | skipping: [testbed-node-0] 
2026-03-31 03:20:58.653222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:21:00.982654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:21:00.982765 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
03:21:00.982846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:21:00.982892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:21:00.982907 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
03:21:00.982920 | orchestrator | 2026-03-31 03:21:00.982935 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-31 03:21:00.982948 | orchestrator | Tuesday 31 March 2026 03:20:58 +0000 (0:00:00.851) 0:03:07.402 ********* 2026-03-31 03:21:00.982979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:00.983017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:00.983035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:00.983066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:00.983082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:00.983105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-31 03:21:07.627564 | orchestrator | 2026-03-31 03:21:07.627687 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-31 03:21:07.627706 | orchestrator | Tuesday 31 March 2026 03:21:00 +0000 (0:00:02.330) 0:03:09.732 ********* 2026-03-31 03:21:07.627725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:07.627877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:07.627915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:07.627954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:07.627969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:07.627991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:07.628003 | orchestrator | 2026-03-31 03:21:07.628014 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-31 03:21:07.628025 | orchestrator | Tuesday 31 March 2026 03:21:07 +0000 (0:00:06.037) 0:03:15.769 ********* 2026-03-31 03:21:07.628042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:21:07.628055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:21:07.628066 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:21:07.628089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:21:12.080340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:21:12.080465 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:21:12.080487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-31 03:21:12.080520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:21:12.080533 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:21:12.080544 | orchestrator | 2026-03-31 03:21:12.080557 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-31 03:21:12.080569 | orchestrator | Tuesday 31 March 2026 03:21:07 +0000 (0:00:00.610) 0:03:16.380 ********* 2026-03-31 03:21:12.080580 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:21:12.080591 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:21:12.080601 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:21:12.080612 | orchestrator | 2026-03-31 03:21:12.080623 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-31 03:21:12.080634 | orchestrator | Tuesday 31 March 2026 03:21:09 +0000 (0:00:01.576) 0:03:17.956 ********* 2026-03-31 03:21:12.080644 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:21:12.080655 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:21:12.080666 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:21:12.080677 | orchestrator | 2026-03-31 03:21:12.080687 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-31 03:21:12.080698 | orchestrator | Tuesday 31 March 2026 03:21:09 +0000 (0:00:00.341) 0:03:18.298 ********* 2026-03-31 03:21:12.080759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:12.080797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:12.080817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-31 03:21:12.080830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:12.080850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:12.080873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:21:59.134891 | orchestrator | 2026-03-31 03:21:59.134998 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-31 03:21:59.135012 | orchestrator | Tuesday 31 March 2026 03:21:11 +0000 (0:00:02.074) 0:03:20.372 ********* 2026-03-31 03:21:59.135021 | orchestrator | 2026-03-31 03:21:59.135029 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-31 03:21:59.135037 | orchestrator | Tuesday 31 March 2026 03:21:11 +0000 (0:00:00.167) 0:03:20.540 ********* 2026-03-31 
03:21:59.135046 | orchestrator | 2026-03-31 03:21:59.135054 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-31 03:21:59.135076 | orchestrator | Tuesday 31 March 2026 03:21:11 +0000 (0:00:00.143) 0:03:20.684 ********* 2026-03-31 03:21:59.135084 | orchestrator | 2026-03-31 03:21:59.135101 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-31 03:21:59.135109 | orchestrator | Tuesday 31 March 2026 03:21:12 +0000 (0:00:00.146) 0:03:20.831 ********* 2026-03-31 03:21:59.135117 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:21:59.135126 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:21:59.135134 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:21:59.135142 | orchestrator | 2026-03-31 03:21:59.135150 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-31 03:21:59.135158 | orchestrator | Tuesday 31 March 2026 03:21:36 +0000 (0:00:24.279) 0:03:45.110 ********* 2026-03-31 03:21:59.135166 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:21:59.135174 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:21:59.135182 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:21:59.135190 | orchestrator | 2026-03-31 03:21:59.135198 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-31 03:21:59.135205 | orchestrator | 2026-03-31 03:21:59.135213 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-31 03:21:59.135221 | orchestrator | Tuesday 31 March 2026 03:21:46 +0000 (0:00:10.320) 0:03:55.431 ********* 2026-03-31 03:21:59.135230 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:21:59.135239 | orchestrator | 2026-03-31 03:21:59.135262 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-31 03:21:59.135270 | orchestrator | Tuesday 31 March 2026 03:21:47 +0000 (0:00:01.318) 0:03:56.750 ********* 2026-03-31 03:21:59.135278 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:21:59.135304 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:21:59.135314 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:21:59.135327 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:21:59.135342 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:21:59.135354 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:21:59.135365 | orchestrator | 2026-03-31 03:21:59.135378 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-31 03:21:59.135387 | orchestrator | Tuesday 31 March 2026 03:21:48 +0000 (0:00:00.842) 0:03:57.592 ********* 2026-03-31 03:21:59.135395 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:21:59.135402 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:21:59.135410 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:21:59.135418 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:21:59.135426 | orchestrator | 2026-03-31 03:21:59.135436 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-31 03:21:59.135445 | orchestrator | Tuesday 31 March 2026 03:21:49 +0000 (0:00:00.879) 0:03:58.472 ********* 2026-03-31 03:21:59.135454 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-31 03:21:59.135464 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-31 03:21:59.135473 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-31 03:21:59.135481 | orchestrator | 2026-03-31 03:21:59.135491 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-31 
03:21:59.135500 | orchestrator | Tuesday 31 March 2026 03:21:50 +0000 (0:00:01.008) 0:03:59.480 ********* 2026-03-31 03:21:59.135509 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-31 03:21:59.135518 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-31 03:21:59.135527 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-31 03:21:59.135536 | orchestrator | 2026-03-31 03:21:59.135546 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-31 03:21:59.135591 | orchestrator | Tuesday 31 March 2026 03:21:51 +0000 (0:00:01.186) 0:04:00.667 ********* 2026-03-31 03:21:59.135599 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-31 03:21:59.135607 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:21:59.135615 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-31 03:21:59.135623 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:21:59.135631 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-31 03:21:59.135638 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:21:59.135646 | orchestrator | 2026-03-31 03:21:59.135654 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-31 03:21:59.135662 | orchestrator | Tuesday 31 March 2026 03:21:52 +0000 (0:00:00.585) 0:04:01.252 ********* 2026-03-31 03:21:59.135670 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-31 03:21:59.135678 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-31 03:21:59.135686 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 03:21:59.135694 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 03:21:59.135702 | orchestrator | skipping: [testbed-node-0] 
2026-03-31 03:21:59.135710 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 03:21:59.135718 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 03:21:59.135726 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:21:59.135748 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 03:21:59.135756 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-31 03:21:59.135764 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 03:21:59.135779 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:21:59.135787 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 03:21:59.135795 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 03:21:59.135803 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-31 03:21:59.135810 | orchestrator |
2026-03-31 03:21:59.135818 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-31 03:21:59.135826 | orchestrator | Tuesday 31 March 2026 03:21:53 +0000 (0:00:01.149) 0:04:02.401 *********
2026-03-31 03:21:59.135834 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:21:59.135842 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:21:59.135850 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:21:59.135857 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:21:59.135865 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:21:59.135873 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:21:59.135881 | orchestrator |
2026-03-31 03:21:59.135888 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-31 03:21:59.135896 | orchestrator |
Tuesday 31 March 2026 03:21:54 +0000 (0:00:01.059) 0:04:03.461 ********* 2026-03-31 03:21:59.135904 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:21:59.135912 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:21:59.135920 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:21:59.135927 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:21:59.135935 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:21:59.135943 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:21:59.135951 | orchestrator | 2026-03-31 03:21:59.135958 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-31 03:21:59.135966 | orchestrator | Tuesday 31 March 2026 03:21:57 +0000 (0:00:02.561) 0:04:06.023 ********* 2026-03-31 03:21:59.135982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:21:59.135995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:21:59.136010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257668 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:01.257733 | orchestrator | 2026-03-31 03:22:01.257747 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2026-03-31 03:22:01.257760 | orchestrator | Tuesday 31 March 2026 03:21:59 +0000 (0:00:02.497) 0:04:08.520 ********* 2026-03-31 03:22:01.257774 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:22:01.257789 | orchestrator | 2026-03-31 03:22:01.257802 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-31 03:22:01.257825 | orchestrator | Tuesday 31 March 2026 03:22:01 +0000 (0:00:01.491) 0:04:10.011 ********* 2026-03-31 03:22:04.679943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:04.680290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 
03:22:06.687019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:06.687124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:06.687141 | orchestrator | 2026-03-31 03:22:06.687155 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-31 03:22:06.687168 | orchestrator | Tuesday 31 March 2026 03:22:05 +0000 (0:00:03.779) 0:04:13.790 ********* 2026-03-31 03:22:06.687203 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:06.687219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:06.687231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:06.687243 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:06.687280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:06.687294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:06.687307 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:06.687326 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:06.687337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:06.687349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:06.687369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:08.173224 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:08.173344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2026-03-31 03:22:08.173364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 03:22:08.173399 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:22:08.173412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-31 03:22:08.173424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-03-31 03:22:08.173435 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:22:08.173447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-31 03:22:08.173458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 03:22:08.173469 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:22:08.173480 | orchestrator | 2026-03-31 03:22:08.173492 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-31 03:22:08.173504 | orchestrator | Tuesday 31 March 2026 03:22:06 +0000 (0:00:01.735) 0:04:15.526 ********* 2026-03-31 03:22:08.173607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:08.173639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:08.173653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:08.173664 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:08.173676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:08.173688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:08.173713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:15.969082 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:15.969210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:15.969234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:15.969249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:22:15.969264 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:15.969278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-31 03:22:15.969293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 03:22:15.969307 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:22:15.969357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-31 03:22:15.969398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 03:22:15.969414 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:22:15.969428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-31 03:22:15.969441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-31 03:22:15.969455 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:22:15.969469 | orchestrator | 2026-03-31 03:22:15.969483 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-31 03:22:15.969523 | orchestrator | Tuesday 31 March 2026 03:22:08 +0000 (0:00:02.181) 0:04:17.707 ********* 2026-03-31 03:22:15.969537 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:22:15.969550 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:22:15.969563 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:22:15.969576 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:22:15.969590 | orchestrator | 2026-03-31 03:22:15.969604 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-31 
03:22:15.969618 | orchestrator | Tuesday 31 March 2026 03:22:10 +0000 (0:00:01.158) 0:04:18.865 ********* 2026-03-31 03:22:15.969632 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:22:15.969645 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:22:15.969660 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 03:22:15.969673 | orchestrator | 2026-03-31 03:22:15.969687 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-31 03:22:15.969701 | orchestrator | Tuesday 31 March 2026 03:22:11 +0000 (0:00:01.165) 0:04:20.030 ********* 2026-03-31 03:22:15.969715 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:22:15.969728 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:22:15.969741 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 03:22:15.969755 | orchestrator | 2026-03-31 03:22:15.969768 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-31 03:22:15.969792 | orchestrator | Tuesday 31 March 2026 03:22:12 +0000 (0:00:01.035) 0:04:21.066 ********* 2026-03-31 03:22:15.969806 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:22:15.969820 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:22:15.969834 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:22:15.969847 | orchestrator | 2026-03-31 03:22:15.969860 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-31 03:22:15.969875 | orchestrator | Tuesday 31 March 2026 03:22:12 +0000 (0:00:00.533) 0:04:21.599 ********* 2026-03-31 03:22:15.969889 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:22:15.969902 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:22:15.969915 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:22:15.969929 | orchestrator | 2026-03-31 03:22:15.969942 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-03-31 03:22:15.969956 | orchestrator | Tuesday 31 March 2026 03:22:13 +0000 (0:00:00.519) 0:04:22.119 ********* 2026-03-31 03:22:15.969971 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-31 03:22:15.969986 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-31 03:22:15.969999 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-31 03:22:15.970068 | orchestrator | 2026-03-31 03:22:15.970088 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-31 03:22:15.970107 | orchestrator | Tuesday 31 March 2026 03:22:14 +0000 (0:00:01.411) 0:04:23.530 ********* 2026-03-31 03:22:15.970130 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-31 03:22:34.919804 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-31 03:22:34.919898 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-31 03:22:34.919913 | orchestrator | 2026-03-31 03:22:34.919925 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-31 03:22:34.919934 | orchestrator | Tuesday 31 March 2026 03:22:15 +0000 (0:00:01.189) 0:04:24.719 ********* 2026-03-31 03:22:34.919940 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-31 03:22:34.919946 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-31 03:22:34.919952 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-31 03:22:34.919958 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-31 03:22:34.919964 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-31 03:22:34.919970 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-31 03:22:34.919975 | orchestrator | 2026-03-31 03:22:34.919981 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-31 
03:22:34.919987 | orchestrator | Tuesday 31 March 2026 03:22:19 +0000 (0:00:03.714) 0:04:28.434 ********* 2026-03-31 03:22:34.919993 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:34.920002 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:34.920011 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:34.920021 | orchestrator | 2026-03-31 03:22:34.920030 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-31 03:22:34.920039 | orchestrator | Tuesday 31 March 2026 03:22:20 +0000 (0:00:00.334) 0:04:28.768 ********* 2026-03-31 03:22:34.920048 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:34.920057 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:34.920066 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:34.920077 | orchestrator | 2026-03-31 03:22:34.920087 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-31 03:22:34.920096 | orchestrator | Tuesday 31 March 2026 03:22:20 +0000 (0:00:00.580) 0:04:29.348 ********* 2026-03-31 03:22:34.920107 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:22:34.920113 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:22:34.920118 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:22:34.920124 | orchestrator | 2026-03-31 03:22:34.920130 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-31 03:22:34.920155 | orchestrator | Tuesday 31 March 2026 03:22:21 +0000 (0:00:01.354) 0:04:30.703 ********* 2026-03-31 03:22:34.920162 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-31 03:22:34.920169 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-31 03:22:34.920175 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-31 03:22:34.920182 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-31 03:22:34.920188 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-31 03:22:34.920194 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-31 03:22:34.920199 | orchestrator | 2026-03-31 03:22:34.920205 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-31 03:22:34.920211 | orchestrator | Tuesday 31 March 2026 03:22:25 +0000 (0:00:03.498) 0:04:34.201 ********* 2026-03-31 03:22:34.920216 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 03:22:34.920222 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 03:22:34.920228 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 03:22:34.920234 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-31 03:22:34.920239 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:22:34.920245 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-31 03:22:34.920250 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:22:34.920258 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-31 03:22:34.920267 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:22:34.920276 | orchestrator | 2026-03-31 03:22:34.920286 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-31 03:22:34.920295 | orchestrator | Tuesday 31 March 2026 03:22:28 +0000 (0:00:03.466) 0:04:37.668 ********* 2026-03-31 03:22:34.920304 | 
orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:34.920314 | orchestrator | 2026-03-31 03:22:34.920325 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-31 03:22:34.920335 | orchestrator | Tuesday 31 March 2026 03:22:29 +0000 (0:00:00.142) 0:04:37.811 ********* 2026-03-31 03:22:34.920344 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:34.920352 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:34.920358 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:34.920365 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:22:34.920372 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:22:34.920378 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:22:34.920384 | orchestrator | 2026-03-31 03:22:34.920390 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-31 03:22:34.920397 | orchestrator | Tuesday 31 March 2026 03:22:29 +0000 (0:00:00.909) 0:04:38.720 ********* 2026-03-31 03:22:34.920403 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:22:34.920410 | orchestrator | 2026-03-31 03:22:34.920484 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-31 03:22:34.920495 | orchestrator | Tuesday 31 March 2026 03:22:30 +0000 (0:00:00.766) 0:04:39.486 ********* 2026-03-31 03:22:34.920502 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:22:34.920522 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:22:34.920529 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:22:34.920534 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:22:34.920540 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:22:34.920545 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:22:34.920551 | orchestrator | 2026-03-31 03:22:34.920556 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-03-31 03:22:34.920569 | orchestrator | Tuesday 31 March 2026 03:22:31 +0000 (0:00:00.934) 0:04:40.421 ********* 2026-03-31 03:22:34.920578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:34.920587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:34.920594 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-31 03:22:34.920601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:34.920617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793457 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:41.793494 | orchestrator | 2026-03-31 03:22:41.793500 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-31 03:22:41.793504 | orchestrator | Tuesday 31 March 2026 03:22:35 +0000 (0:00:03.626) 0:04:44.048 ********* 2026-03-31 03:22:41.793509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:41.793517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:41.793529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:42.270305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:42.270466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-31 03:22:42.270488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:22:42.270502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 03:22:42.270687 | orchestrator | 2026-03-31 03:22:42.270700 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-31 03:22:42.270719 | orchestrator | Tuesday 31 March 2026 03:22:42 +0000 (0:00:06.971) 0:04:51.019 ********* 2026-03-31 03:23:04.136042 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:23:04.136141 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:23:04.136153 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:23:04.136161 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.136168 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.136175 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.136183 | orchestrator | 2026-03-31 03:23:04.136192 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-31 03:23:04.136200 | orchestrator | Tuesday 31 March 2026 03:22:43 +0000 (0:00:01.504) 0:04:52.523 ********* 2026-03-31 03:23:04.136208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-31 03:23:04.136216 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-31 03:23:04.136224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-31 03:23:04.136231 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-31 03:23:04.136238 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-31 03:23:04.136245 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-31 03:23:04.136252 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-31 03:23:04.136260 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.136267 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-31 03:23:04.136274 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.136282 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-31 03:23:04.136289 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.136296 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-31 03:23:04.136323 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-31 03:23:04.136368 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-31 03:23:04.136383 | orchestrator | 2026-03-31 03:23:04.136395 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-31 03:23:04.136406 | orchestrator | Tuesday 31 March 2026 03:22:47 +0000 (0:00:03.654) 0:04:56.177 ********* 2026-03-31 03:23:04.136418 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:23:04.136429 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:23:04.136442 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:23:04.136455 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.136467 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.136478 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.136490 | orchestrator | 2026-03-31 03:23:04.136502 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-31 03:23:04.136513 | orchestrator | Tuesday 31 March 2026 03:22:48 +0000 (0:00:00.632) 0:04:56.810 ********* 2026-03-31 03:23:04.136526 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-31 03:23:04.136540 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-31 03:23:04.136552 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-31 03:23:04.136565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-31 03:23:04.136572 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-31 03:23:04.136594 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-31 03:23:04.136603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136612 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136620 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136628 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136637 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.136645 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136653 | orchestrator | 
skipping: [testbed-node-1] 2026-03-31 03:23:04.136662 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-31 03:23:04.136670 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.136678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136686 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136711 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136720 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136728 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136735 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-31 03:23:04.136742 | orchestrator | 2026-03-31 03:23:04.136749 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-31 03:23:04.136765 | orchestrator | Tuesday 31 March 2026 03:22:53 +0000 (0:00:05.410) 0:05:02.220 ********* 2026-03-31 03:23:04.136773 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 03:23:04.136780 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 03:23:04.136787 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 03:23:04.136794 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:23:04.136801 
| orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-31 03:23:04.136809 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:23:04.136816 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-31 03:23:04.136823 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-31 03:23:04.136830 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-31 03:23:04.136837 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 03:23:04.136844 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 03:23:04.136851 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 03:23:04.136858 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-31 03:23:04.136865 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.136873 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-31 03:23:04.136880 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.136887 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-31 03:23:04.136894 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.136902 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:23:04.136909 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:23:04.136916 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-31 03:23:04.136923 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:23:04.136930 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:23:04.136937 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-31 03:23:04.136944 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:23:04.136951 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:23:04.136962 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-31 03:23:04.136970 | orchestrator | 2026-03-31 03:23:04.136977 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-31 03:23:04.136984 | orchestrator | Tuesday 31 March 2026 03:23:00 +0000 (0:00:06.993) 0:05:09.214 ********* 2026-03-31 03:23:04.136991 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:23:04.136998 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:23:04.137005 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:23:04.137013 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.137020 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.137027 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.137034 | orchestrator | 2026-03-31 03:23:04.137046 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-31 03:23:04.137064 | orchestrator | Tuesday 31 March 2026 03:23:01 +0000 (0:00:00.879) 0:05:10.093 ********* 2026-03-31 03:23:04.137076 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:23:04.137087 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:23:04.137098 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:23:04.137110 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.137121 | orchestrator | 
skipping: [testbed-node-1] 2026-03-31 03:23:04.137131 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.137143 | orchestrator | 2026-03-31 03:23:04.137155 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-31 03:23:04.137168 | orchestrator | Tuesday 31 March 2026 03:23:02 +0000 (0:00:00.674) 0:05:10.767 ********* 2026-03-31 03:23:04.137180 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:23:04.137193 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:23:04.137205 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:23:04.137218 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:23:04.137229 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:23:04.137242 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:23:04.137249 | orchestrator | 2026-03-31 03:23:04.137263 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-31 03:23:05.354576 | orchestrator | Tuesday 31 March 2026 03:23:04 +0000 (0:00:02.107) 0:05:12.875 ********* 2026-03-31 03:23:05.354662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-31 03:23:05.354676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-31 03:23:05.354685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-31 03:23:05.354693 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:23:05.354715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-31 03:23:05.354744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-31 03:23:05.354787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-31 03:23:05.354796 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:23:05.354804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-31 03:23:05.354812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-31 03:23:05.354824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-31 03:23:05.354838 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:23:05.354847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:23:05.354860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:23:08.852308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:23:08.852523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:23:08.852540 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:23:08.852554 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:23:08.852566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:23:08.852578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:23:08.852614 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:23:08.852626 | orchestrator |
2026-03-31 03:23:08.852638 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-31 03:23:08.852650 | orchestrator | Tuesday 31 March 2026 03:23:05 +0000 (0:00:01.354) 0:05:14.229 *********
2026-03-31 03:23:08.852677 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-31 03:23:08.852688 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852699 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:23:08.852709 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-31 03:23:08.852720 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852730 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:23:08.852741 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-31 03:23:08.852752 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852762 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:23:08.852773 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-31 03:23:08.852783 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852794 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:23:08.852804 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-31 03:23:08.852815 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852825 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:23:08.852837 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-31 03:23:08.852850 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-31 03:23:08.852863 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:23:08.852875 | orchestrator |
2026-03-31 03:23:08.852888 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-31 03:23:08.852901 | orchestrator | Tuesday 31 March 2026 03:23:06 +0000 (0:00:00.896) 0:05:15.126 *********
2026-03-31 03:23:08.852934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-31 03:23:08.852950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-31 03:23:08.852972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-31 03:23:08.852992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:23:08.853006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:23:08.853029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-31 03:24:02.155088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-31 03:24:02.155225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-31 03:24:02.155258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-31 03:24:02.155267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 03:24:02.155347 | orchestrator |
2026-03-31 03:24:02.155355 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-31 03:24:02.155363 | orchestrator | Tuesday 31 March 2026 03:23:09 +0000 (0:00:02.643) 0:05:17.769 *********
2026-03-31 03:24:02.155369 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:24:02.155377 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:24:02.155383 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:24:02.155390 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:24:02.155396 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:24:02.155403 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:24:02.155409 | orchestrator |
2026-03-31 03:24:02.155416 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155423 | orchestrator | Tuesday 31 March 2026 03:23:09 +0000 (0:00:00.893) 0:05:18.663 *********
2026-03-31 03:24:02.155429 | orchestrator |
2026-03-31 03:24:02.155436 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155442 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.153) 0:05:18.816 *********
2026-03-31 03:24:02.155449 | orchestrator |
2026-03-31 03:24:02.155460 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155467 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.155) 0:05:18.971 *********
2026-03-31 03:24:02.155474 | orchestrator |
2026-03-31 03:24:02.155481 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155487 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.144) 0:05:19.116 *********
2026-03-31 03:24:02.155494 | orchestrator |
2026-03-31 03:24:02.155500 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155507 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.144) 0:05:19.261 *********
2026-03-31 03:24:02.155514 | orchestrator |
2026-03-31 03:24:02.155520 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-31 03:24:02.155527 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.318) 0:05:19.580 *********
2026-03-31 03:24:02.155534 | orchestrator |
2026-03-31 03:24:02.155540 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-31 03:24:02.155547 | orchestrator | Tuesday 31 March 2026 03:23:10 +0000 (0:00:00.149) 0:05:19.729 *********
2026-03-31 03:24:02.155554 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:24:02.155560 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:24:02.155567 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:24:02.155574 | orchestrator |
2026-03-31 03:24:02.155580 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-31 03:24:02.155587 | orchestrator | Tuesday 31 March 2026 03:23:21 +0000 (0:00:10.215) 0:05:29.945 *********
2026-03-31 03:24:02.155593 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:24:02.155600 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:24:02.155607 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:24:02.155618 | orchestrator |
2026-03-31 03:24:02.155625 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-31 03:24:02.155631 | orchestrator | Tuesday 31 March 2026 03:23:35 +0000 (0:00:14.523) 0:05:44.469 *********
2026-03-31 03:24:02.155638 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:24:02.155645 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:24:02.155651 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:24:02.155658 | orchestrator |
2026-03-31 03:24:02.155669 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-31 03:26:31.101084 | orchestrator | Tuesday 31 March 2026 03:24:02 +0000 (0:00:26.428) 0:06:10.897 *********
2026-03-31 03:26:31.101186 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:26:31.101199 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:26:31.101208 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:26:31.101217 | orchestrator |
2026-03-31 03:26:31.101227 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-31 03:26:31.101236 | orchestrator | Tuesday 31 March 2026 03:24:42 +0000 (0:00:40.586) 0:06:51.483 *********
2026-03-31 03:26:31.101245 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-03-31 03:26:31.101255 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-03-31 03:26:31.101263 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-03-31 03:26:31.101272 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:26:31.101281 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:26:31.101289 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:26:31.101298 | orchestrator |
2026-03-31 03:26:31.101307 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-31 03:26:31.101315 | orchestrator | Tuesday 31 March 2026 03:24:49 +0000 (0:00:06.288) 0:06:57.772 *********
2026-03-31 03:26:31.101324 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:26:31.101332 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:26:31.101341 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:26:31.101349 | orchestrator |
2026-03-31 03:26:31.101359 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-31 03:26:31.101367 | orchestrator | Tuesday 31 March 2026 03:24:49 +0000 (0:00:00.772) 0:06:58.545 *********
2026-03-31 03:26:31.101376 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:26:31.101385 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:26:31.101393 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:26:31.101402 | orchestrator |
2026-03-31 03:26:31.101411 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-31 03:26:31.101420 | orchestrator | Tuesday 31 March 2026 03:25:21 +0000 (0:00:31.424) 0:07:29.969 *********
2026-03-31 03:26:31.101428 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:26:31.101437 | orchestrator |
2026-03-31 03:26:31.101445 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-31 03:26:31.101454 | orchestrator | Tuesday 31 March 2026 03:25:21 +0000 (0:00:00.150) 0:07:30.120 *********
2026-03-31 03:26:31.101462 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:26:31.101471 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:26:31.101479 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:31.101488 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:26:31.101496 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:26:31.101505 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-31 03:26:31.101516 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 03:26:31.101525 | orchestrator | 2026-03-31 03:26:31.101533 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-31 03:26:31.101541 | orchestrator | Tuesday 31 March 2026 03:25:44 +0000 (0:00:22.950) 0:07:53.070 ********* 2026-03-31 03:26:31.101570 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:26:31.101579 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:26:31.101588 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:26:31.101596 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:31.101605 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:31.101613 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:31.101622 | orchestrator | 2026-03-31 03:26:31.101643 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-31 03:26:31.101652 | orchestrator | Tuesday 31 March 2026 03:25:53 +0000 (0:00:09.672) 0:08:02.743 ********* 2026-03-31 03:26:31.101662 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:26:31.101678 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:26:31.101697 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:31.101721 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:31.101736 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:31.101750 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-31 03:26:31.101765 | orchestrator | 2026-03-31 03:26:31.101780 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-31 03:26:31.101794 | orchestrator | Tuesday 31 March 2026 03:25:58 +0000 (0:00:04.456) 0:08:07.199 ********* 2026-03-31 03:26:31.101807 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 03:26:31.101845 | 
orchestrator | 2026-03-31 03:26:31.101859 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-31 03:26:31.101872 | orchestrator | Tuesday 31 March 2026 03:26:11 +0000 (0:00:12.651) 0:08:19.851 ********* 2026-03-31 03:26:31.101886 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 03:26:31.101901 | orchestrator | 2026-03-31 03:26:31.101915 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-31 03:26:31.101928 | orchestrator | Tuesday 31 March 2026 03:26:12 +0000 (0:00:01.576) 0:08:21.427 ********* 2026-03-31 03:26:31.101943 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:26:31.101959 | orchestrator | 2026-03-31 03:26:31.101973 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-31 03:26:31.101988 | orchestrator | Tuesday 31 March 2026 03:26:14 +0000 (0:00:01.755) 0:08:23.183 ********* 2026-03-31 03:26:31.101996 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 03:26:31.102005 | orchestrator | 2026-03-31 03:26:31.102013 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-31 03:26:31.102078 | orchestrator | Tuesday 31 March 2026 03:26:25 +0000 (0:00:11.058) 0:08:34.242 ********* 2026-03-31 03:26:31.102087 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:26:31.102097 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:26:31.102106 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:26:31.102132 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:31.102141 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:31.102150 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:31.102158 | orchestrator | 2026-03-31 03:26:31.102167 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-31 03:26:31.102175 | orchestrator | 2026-03-31 
03:26:31.102184 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-31 03:26:31.102193 | orchestrator | Tuesday 31 March 2026 03:26:27 +0000 (0:00:02.049) 0:08:36.291 *********
2026-03-31 03:26:31.102201 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:26:31.102210 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:26:31.102218 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:26:31.102226 | orchestrator |
2026-03-31 03:26:31.102235 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-31 03:26:31.102243 | orchestrator |
2026-03-31 03:26:31.102252 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-31 03:26:31.102260 | orchestrator | Tuesday 31 March 2026 03:26:28 +0000 (0:00:00.977) 0:08:37.269 *********
2026-03-31 03:26:31.102269 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:31.102288 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:26:31.102297 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:26:31.102306 | orchestrator |
2026-03-31 03:26:31.102314 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-31 03:26:31.102323 | orchestrator |
2026-03-31 03:26:31.102332 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-31 03:26:31.102340 | orchestrator | Tuesday 31 March 2026 03:26:29 +0000 (0:00:00.798) 0:08:38.067 *********
2026-03-31 03:26:31.102349 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-31 03:26:31.102357 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-31 03:26:31.102366 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102375 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-31 03:26:31.102384 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-31 03:26:31.102393 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102401 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:26:31.102410 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-31 03:26:31.102418 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-31 03:26:31.102427 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102435 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-31 03:26:31.102444 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-31 03:26:31.102453 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102461 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:26:31.102470 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-31 03:26:31.102479 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-31 03:26:31.102487 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102495 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-31 03:26:31.102504 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-31 03:26:31.102513 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102521 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:26:31.102530 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-31 03:26:31.102538 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-31 03:26:31.102547 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102561 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-31 03:26:31.102570 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-31 03:26:31.102579 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102587 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:31.102595 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-31 03:26:31.102604 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-31 03:26:31.102613 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102621 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-31 03:26:31.102630 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-31 03:26:31.102638 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102647 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:26:31.102655 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-31 03:26:31.102664 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-31 03:26:31.102673 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-31 03:26:31.102681 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-31 03:26:31.102695 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-31 03:26:31.102704 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-31 03:26:31.102713 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:26:31.102721 | orchestrator |
2026-03-31 03:26:31.102730 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-31 03:26:31.102738 | orchestrator |
2026-03-31 03:26:31.102747 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-31 03:26:31.102755 | orchestrator | Tuesday 31 March 2026 03:26:30 +0000 (0:00:01.549) 0:08:39.617 *********
2026-03-31 03:26:31.102764 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-31 03:26:31.102773 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-31 03:26:31.102781 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:31.102796 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-31 03:26:33.498976 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-31 03:26:33.499076 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:26:33.499089 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-31 03:26:33.499118 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-31 03:26:33.499127 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:26:33.499136 | orchestrator |
2026-03-31 03:26:33.499145 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-31 03:26:33.499154 | orchestrator |
2026-03-31 03:26:33.499162 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-31 03:26:33.499171 | orchestrator | Tuesday 31 March 2026 03:26:31 +0000 (0:00:00.621) 0:08:40.239 *********
2026-03-31 03:26:33.499179 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:33.499187 | orchestrator |
2026-03-31 03:26:33.499195 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-31 03:26:33.499202 | orchestrator |
2026-03-31 03:26:33.499210 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-31 03:26:33.499218 | orchestrator | Tuesday 31 March 2026 03:26:32 +0000 (0:00:01.013) 0:08:41.252 *********
2026-03-31 03:26:33.499226 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:26:33.499234 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:26:33.499242 | orchestrator | skipping: [testbed-node-2]
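The PLAY RECAP that follows is machine-parsable: each host line carries `key=value` counters. A minimal sketch of extracting them (hypothetical helper, not part of this job):

```python
import re

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counter dict)."""
    host, _, rest = line.partition(" : ")
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap(
    "testbed-node-0 : ok=54 changed=35 unreachable=0 failed=0 skipped=44 rescued=0 ignored=0"
)
# host -> "testbed-node-0"; stats["ok"] -> 54; stats["failed"] -> 0
```

Checking `failed` and `unreachable` against zero on every host is the usual way CI jobs decide whether a play like this one succeeded.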
2026-03-31 03:26:33.499250 | orchestrator |
2026-03-31 03:26:33.499259 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:26:33.499272 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:26:33.499287 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-31 03:26:33.499299 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-31 03:26:33.499312 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-31 03:26:33.499324 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-31 03:26:33.499335 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-31 03:26:33.499347 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-31 03:26:33.499358 | orchestrator |
2026-03-31 03:26:33.499370 | orchestrator |
2026-03-31 03:26:33.499382 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:26:33.499394 | orchestrator | Tuesday 31 March 2026 03:26:32 +0000 (0:00:00.485) 0:08:41.738 *********
2026-03-31 03:26:33.499435 | orchestrator | ===============================================================================
2026-03-31 03:26:33.499449 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.59s
2026-03-31 03:26:33.499463 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.42s
2026-03-31 03:26:33.499496 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.67s
2026-03-31 03:26:33.499512 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.43s
2026-03-31 03:26:33.499525 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.28s
2026-03-31 03:26:33.499541 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.95s
2026-03-31 03:26:33.499558 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.68s
2026-03-31 03:26:33.499572 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.33s
2026-03-31 03:26:33.499587 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.52s
2026-03-31 03:26:33.499601 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.27s
2026-03-31 03:26:33.499614 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.65s
2026-03-31 03:26:33.499628 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.47s
2026-03-31 03:26:33.499641 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.44s
2026-03-31 03:26:33.499653 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.06s
2026-03-31 03:26:33.499666 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.51s
2026-03-31 03:26:33.499680 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.32s
2026-03-31 03:26:33.499694 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.22s
2026-03-31 03:26:33.499709 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.67s
2026-03-31 03:26:33.499722 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.99s
2026-03-31 03:26:33.499737 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 6.98s
2026-03-31 03:26:35.957968 | orchestrator | 2026-03-31 03:26:35 | INFO  | Task 439b1531-72b9-47b2-94d1-690ee2b97df6 (horizon) was prepared for execution.
2026-03-31 03:26:35.958123 | orchestrator | 2026-03-31 03:26:35 | INFO  | It takes a moment until task 439b1531-72b9-47b2-94d1-690ee2b97df6 (horizon) has been started and output is visible here.
2026-03-31 03:26:43.711136 | orchestrator |
2026-03-31 03:26:43.711267 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:26:43.711286 | orchestrator |
2026-03-31 03:26:43.711298 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:26:43.711310 | orchestrator | Tuesday 31 March 2026 03:26:40 +0000 (0:00:00.269) 0:00:00.269 *********
2026-03-31 03:26:43.711321 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:26:43.711333 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:26:43.711344 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:26:43.711354 | orchestrator |
2026-03-31 03:26:43.711365 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:26:43.711376 | orchestrator | Tuesday 31 March 2026 03:26:40 +0000 (0:00:00.347) 0:00:00.616 *********
2026-03-31 03:26:43.711387 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-31 03:26:43.711399 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-31 03:26:43.711409 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-31 03:26:43.711420 | orchestrator |
2026-03-31 03:26:43.711430 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-31 03:26:43.711441 | orchestrator |
2026-03-31 03:26:43.711451 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-31 03:26:43.711488 |
orchestrator | Tuesday 31 March 2026 03:26:41 +0000 (0:00:00.485) 0:00:01.102 ********* 2026-03-31 03:26:43.711500 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:26:43.711511 | orchestrator | 2026-03-31 03:26:43.711522 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-31 03:26:43.711533 | orchestrator | Tuesday 31 March 2026 03:26:41 +0000 (0:00:00.579) 0:00:01.681 ********* 2026-03-31 03:26:43.711567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:26:43.711607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:26:43.711638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:26:43.711654 | orchestrator | 2026-03-31 03:26:43.711667 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-31 03:26:43.711679 | orchestrator | Tuesday 31 March 2026 03:26:43 +0000 (0:00:01.262) 0:00:02.943 ********* 2026-03-31 03:26:43.711693 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:43.711705 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:43.711716 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:43.711728 | orchestrator | 2026-03-31 03:26:43.711740 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-31 03:26:43.711752 | orchestrator | Tuesday 31 March 2026 03:26:43 +0000 (0:00:00.520) 0:00:03.464 ********* 2026-03-31 03:26:43.711772 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})
2026-03-31 03:26:50.257231 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-31 03:26:50.257341 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-31 03:26:50.257401 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-31 03:26:50.257416 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-31 03:26:50.257428 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-31 03:26:50.257436 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-31 03:26:50.257444 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-31 03:26:50.257452 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-31 03:26:50.257460 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-31 03:26:50.257467 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-31 03:26:50.257475 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-31 03:26:50.257483 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-31 03:26:50.257491 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-31 03:26:50.257499 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-31 03:26:50.257506 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-31 03:26:50.257514 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-31 03:26:50.257522 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-31 03:26:50.257529 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-31 03:26:50.257537 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-31 03:26:50.257545 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-31 03:26:50.257553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-31 03:26:50.257561 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-31 03:26:50.257570 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-31 03:26:50.257592 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-31 03:26:50.257601 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-31 03:26:50.257609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-31 03:26:50.257617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-31 03:26:50.257629 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-31 03:26:50.257642 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
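The loop above includes `policy_item.yml` only for services whose `enabled` flag is truthy, and the flags visibly mix booleans with the strings `'yes'`/`'no'`. A minimal sketch of that filter (hypothetical helper, not kolla-ansible code):

```python
def is_enabled(value) -> bool:
    """Treat 'yes'/'true'/'1' (any case) and truthy non-strings as enabled."""
    if isinstance(value, str):
        return value.lower() in ("yes", "true", "1")
    return bool(value)

services = [
    {"name": "cloudkitty", "enabled": False},
    {"name": "heat", "enabled": "no"},
    {"name": "cinder", "enabled": "yes"},
    {"name": "designate", "enabled": True},
]
included = [s["name"] for s in services if is_enabled(s["enabled"])]
# included -> ["cinder", "designate"]
```

This mirrors why `heat` (string `'no'`) is skipped while `cinder` (string `'yes'`) and `designate` (boolean `True`) get the include.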
2026-03-31 03:26:50.257642 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-31 03:26:50.257656 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-31 03:26:50.257668 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-31 03:26:50.257684 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-31 03:26:50.257692 | orchestrator | 2026-03-31 03:26:50.257701 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.257710 | orchestrator | Tuesday 31 March 2026 03:26:44 +0000 (0:00:00.830) 0:00:04.295 ********* 2026-03-31 03:26:50.257718 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.257727 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.257735 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:50.257742 | orchestrator | 2026-03-31 03:26:50.257750 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.257759 | orchestrator | Tuesday 31 March 2026 03:26:44 +0000 (0:00:00.346) 0:00:04.641 ********* 2026-03-31 03:26:50.257853 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.257870 | orchestrator | 2026-03-31 03:26:50.257896 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.257905 | orchestrator | Tuesday 31 March 2026 03:26:45 +0000 (0:00:00.345) 0:00:04.987 ********* 2026-03-31 03:26:50.257913 | orchestrator | skipping: [testbed-node-0] 2026-03-31 
03:26:50.257920 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.257928 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.257936 | orchestrator | 2026-03-31 03:26:50.257944 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.257951 | orchestrator | Tuesday 31 March 2026 03:26:45 +0000 (0:00:00.341) 0:00:05.328 ********* 2026-03-31 03:26:50.257959 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.257967 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.257975 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:50.257982 | orchestrator | 2026-03-31 03:26:50.257990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.257998 | orchestrator | Tuesday 31 March 2026 03:26:45 +0000 (0:00:00.323) 0:00:05.652 ********* 2026-03-31 03:26:50.258011 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258096 | orchestrator | 2026-03-31 03:26:50.258112 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.258122 | orchestrator | Tuesday 31 March 2026 03:26:45 +0000 (0:00:00.146) 0:00:05.798 ********* 2026-03-31 03:26:50.258130 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258138 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.258146 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.258154 | orchestrator | 2026-03-31 03:26:50.258161 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.258178 | orchestrator | Tuesday 31 March 2026 03:26:46 +0000 (0:00:00.320) 0:00:06.119 ********* 2026-03-31 03:26:50.258186 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.258194 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.258201 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:50.258209 | orchestrator | 
2026-03-31 03:26:50.258217 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.258225 | orchestrator | Tuesday 31 March 2026 03:26:46 +0000 (0:00:00.548) 0:00:06.668 ********* 2026-03-31 03:26:50.258233 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258240 | orchestrator | 2026-03-31 03:26:50.258248 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.258256 | orchestrator | Tuesday 31 March 2026 03:26:46 +0000 (0:00:00.141) 0:00:06.809 ********* 2026-03-31 03:26:50.258263 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258274 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.258287 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.258301 | orchestrator | 2026-03-31 03:26:50.258314 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.258327 | orchestrator | Tuesday 31 March 2026 03:26:47 +0000 (0:00:00.329) 0:00:07.139 ********* 2026-03-31 03:26:50.258350 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.258365 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.258378 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:50.258386 | orchestrator | 2026-03-31 03:26:50.258394 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.258402 | orchestrator | Tuesday 31 March 2026 03:26:47 +0000 (0:00:00.443) 0:00:07.583 ********* 2026-03-31 03:26:50.258410 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258418 | orchestrator | 2026-03-31 03:26:50.258425 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.258439 | orchestrator | Tuesday 31 March 2026 03:26:47 +0000 (0:00:00.128) 0:00:07.712 ********* 2026-03-31 03:26:50.258447 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 03:26:50.258455 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.258462 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.258470 | orchestrator | 2026-03-31 03:26:50.258478 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.258486 | orchestrator | Tuesday 31 March 2026 03:26:48 +0000 (0:00:00.550) 0:00:08.263 ********* 2026-03-31 03:26:50.258493 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.258501 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.258509 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:26:50.258517 | orchestrator | 2026-03-31 03:26:50.258524 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.258532 | orchestrator | Tuesday 31 March 2026 03:26:48 +0000 (0:00:00.316) 0:00:08.579 ********* 2026-03-31 03:26:50.258540 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258554 | orchestrator | 2026-03-31 03:26:50.258567 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.258581 | orchestrator | Tuesday 31 March 2026 03:26:48 +0000 (0:00:00.144) 0:00:08.724 ********* 2026-03-31 03:26:50.258590 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258597 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.258605 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.258617 | orchestrator | 2026-03-31 03:26:50.258630 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.258644 | orchestrator | Tuesday 31 March 2026 03:26:49 +0000 (0:00:00.356) 0:00:09.081 ********* 2026-03-31 03:26:50.258657 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:26:50.258671 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:26:50.258679 | orchestrator | ok: [testbed-node-2] 2026-03-31 
03:26:50.258686 | orchestrator | 2026-03-31 03:26:50.258694 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:26:50.258702 | orchestrator | Tuesday 31 March 2026 03:26:49 +0000 (0:00:00.336) 0:00:09.418 ********* 2026-03-31 03:26:50.258710 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258717 | orchestrator | 2026-03-31 03:26:50.258725 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:26:50.258733 | orchestrator | Tuesday 31 March 2026 03:26:49 +0000 (0:00:00.342) 0:00:09.760 ********* 2026-03-31 03:26:50.258740 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:26:50.258748 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:26:50.258756 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:26:50.258764 | orchestrator | 2026-03-31 03:26:50.258787 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:26:50.258805 | orchestrator | Tuesday 31 March 2026 03:26:50 +0000 (0:00:00.355) 0:00:10.116 ********* 2026-03-31 03:27:04.584174 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:27:04.584278 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:27:04.584292 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:27:04.584303 | orchestrator | 2026-03-31 03:27:04.584314 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:27:04.584325 | orchestrator | Tuesday 31 March 2026 03:26:50 +0000 (0:00:00.322) 0:00:10.439 ********* 2026-03-31 03:27:04.584335 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:04.584368 | orchestrator | 2026-03-31 03:27:04.584379 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:27:04.584388 | orchestrator | Tuesday 31 March 2026 03:26:50 +0000 (0:00:00.137) 0:00:10.577 ********* 2026-03-31 03:27:04.584398 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:04.584409 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:27:04.584418 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:27:04.584428 | orchestrator | 2026-03-31 03:27:04.584438 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:27:04.584447 | orchestrator | Tuesday 31 March 2026 03:26:51 +0000 (0:00:00.346) 0:00:10.923 ********* 2026-03-31 03:27:04.584457 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:27:04.584467 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:27:04.584476 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:27:04.584486 | orchestrator | 2026-03-31 03:27:04.584495 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-31 03:27:04.584505 | orchestrator | Tuesday 31 March 2026 03:26:51 +0000 (0:00:00.603) 0:00:11.527 ********* 2026-03-31 03:27:04.584515 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:04.584525 | orchestrator | 2026-03-31 03:27:04.584534 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-31 03:27:04.584544 | orchestrator | Tuesday 31 March 2026 03:26:51 +0000 (0:00:00.141) 0:00:11.668 ********* 2026-03-31 03:27:04.584554 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:04.584563 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:27:04.584573 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:27:04.584583 | orchestrator | 2026-03-31 03:27:04.584592 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-31 03:27:04.584602 | orchestrator | Tuesday 31 March 2026 03:26:52 +0000 (0:00:00.301) 0:00:11.969 ********* 2026-03-31 03:27:04.584612 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:27:04.584621 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:27:04.584631 | orchestrator | ok: 
[testbed-node-2]
2026-03-31 03:27:04.584641 | orchestrator |
2026-03-31 03:27:04.584650 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-31 03:27:04.584660 | orchestrator | Tuesday 31 March 2026 03:26:52 +0000 (0:00:00.332) 0:00:12.302 *********
2026-03-31 03:27:04.584670 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.584680 | orchestrator |
2026-03-31 03:27:04.584689 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-31 03:27:04.584699 | orchestrator | Tuesday 31 March 2026 03:26:52 +0000 (0:00:00.128) 0:00:12.430 *********
2026-03-31 03:27:04.584711 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.584722 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:27:04.584733 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:27:04.584767 | orchestrator |
2026-03-31 03:27:04.584779 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-31 03:27:04.584790 | orchestrator | Tuesday 31 March 2026 03:26:53 +0000 (0:00:00.530) 0:00:12.961 *********
2026-03-31 03:27:04.584800 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:27:04.584813 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:27:04.584839 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:27:04.584851 | orchestrator |
2026-03-31 03:27:04.584862 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-31 03:27:04.584873 | orchestrator | Tuesday 31 March 2026 03:26:53 +0000 (0:00:00.379) 0:00:13.340 *********
2026-03-31 03:27:04.584884 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.584895 | orchestrator |
2026-03-31 03:27:04.584906 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-31 03:27:04.584917 | orchestrator | Tuesday 31 March 2026 03:26:53 +0000 (0:00:00.131) 0:00:13.471 *********
2026-03-31 03:27:04.584929 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.584940 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:27:04.584951 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:27:04.584962 | orchestrator |
2026-03-31 03:27:04.584981 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-31 03:27:04.584994 | orchestrator | Tuesday 31 March 2026 03:26:53 +0000 (0:00:00.308) 0:00:13.780 *********
2026-03-31 03:27:04.585005 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:27:04.585016 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:27:04.585027 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:27:04.585038 | orchestrator |
2026-03-31 03:27:04.585049 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-31 03:27:04.585061 | orchestrator | Tuesday 31 March 2026 03:26:55 +0000 (0:00:01.937) 0:00:15.718 *********
2026-03-31 03:27:04.585072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-31 03:27:04.585083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-31 03:27:04.585093 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-31 03:27:04.585102 | orchestrator |
2026-03-31 03:27:04.585112 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-31 03:27:04.585121 | orchestrator | Tuesday 31 March 2026 03:26:57 +0000 (0:00:01.885) 0:00:17.603 *********
2026-03-31 03:27:04.585131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-31 03:27:04.585144 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-31 03:27:04.585160 | orchestrator |
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-31 03:27:04.585176 | orchestrator |
2026-03-31 03:27:04.585193 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-31 03:27:04.585227 | orchestrator | Tuesday 31 March 2026 03:26:59 +0000 (0:00:01.813) 0:00:19.416 *********
2026-03-31 03:27:04.585242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-31 03:27:04.585259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-31 03:27:04.585274 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-31 03:27:04.585289 | orchestrator |
2026-03-31 03:27:04.585306 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-31 03:27:04.585322 | orchestrator | Tuesday 31 March 2026 03:27:01 +0000 (0:00:01.539) 0:00:20.955 *********
2026-03-31 03:27:04.585339 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.585351 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:27:04.585360 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:27:04.585370 | orchestrator |
2026-03-31 03:27:04.585379 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-31 03:27:04.585389 | orchestrator | Tuesday 31 March 2026 03:27:01 +0000 (0:00:00.549) 0:00:21.505 *********
2026-03-31 03:27:04.585398 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:04.585408 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:27:04.585424 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:27:04.585439 | orchestrator |
2026-03-31 03:27:04.585468 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-31 03:27:04.585483
| orchestrator | Tuesday 31 March 2026 03:27:01 +0000 (0:00:00.302) 0:00:21.807 ********* 2026-03-31 03:27:04.585498 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:27:04.585514 | orchestrator | 2026-03-31 03:27:04.585528 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-31 03:27:04.585543 | orchestrator | Tuesday 31 March 2026 03:27:02 +0000 (0:00:00.658) 0:00:22.466 ********* 2026-03-31 03:27:04.585578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:27:04.585887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:27:05.267910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:27:05.268017 | orchestrator | 2026-03-31 03:27:05.268034 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-31 03:27:05.268047 | orchestrator | Tuesday 31 March 2026 03:27:04 +0000 (0:00:01.970) 0:00:24.437 ********* 2026-03-31 03:27:05.268134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:05.268176 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:05.268198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:05.268211 | orchestrator | skipping: [testbed-node-1] 
2026-03-31 03:27:05.268239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:07.778479 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:27:07.778582 | orchestrator | 2026-03-31 03:27:07.778598 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-31 03:27:07.778612 | orchestrator | Tuesday 31 March 2026 03:27:05 +0000 (0:00:00.687) 0:00:25.124 ********* 2026-03-31 03:27:07.778628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:07.778644 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:27:07.778676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:07.778716 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:27:07.778816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 03:27:07.778843 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:27:07.778855 | orchestrator | 2026-03-31 03:27:07.778876 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-31 03:27:07.778894 | orchestrator | Tuesday 31 March 2026 03:27:06 +0000 (0:00:00.894) 0:00:26.019 ********* 2026-03-31 03:27:07.778942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:27:54.236177 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 03:27:54.236338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-31 03:27:54.236355 | orchestrator |
2026-03-31 03:27:54.236366 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-31 03:27:54.236376 | orchestrator | Tuesday 31 March 2026 03:27:07 +0000 (0:00:01.615) 0:00:27.634 *********
2026-03-31 03:27:54.236385 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:27:54.236395 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:27:54.236403 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:27:54.236412 | orchestrator |
2026-03-31 03:27:54.236421 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-31 03:27:54.236429 | orchestrator | Tuesday 31 March 2026 03:27:08 +0000 (0:00:00.323) 0:00:27.958 *********
2026-03-31 03:27:54.236438 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:27:54.236447 | orchestrator |
2026-03-31 03:27:54.236455 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-31 03:27:54.236464 | orchestrator | Tuesday 31 March 2026 03:27:08 +0000 (0:00:00.563) 0:00:28.522 *********
2026-03-31 03:27:54.236472 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:27:54.236481 | orchestrator |
2026-03-31 03:27:54.236489 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-31 03:27:54.236505 | orchestrator | Tuesday 31 March 2026 03:27:10 +0000 (0:00:02.105) 0:00:30.628 *********
2026-03-31 03:27:54.236514 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:27:54.236523 | orchestrator |
2026-03-31 03:27:54.236531 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-31 03:27:54.236539 | orchestrator | Tuesday 31 March 2026 03:27:13 +0000 (0:00:02.538) 0:00:33.166 *********
2026-03-31 03:27:54.236548 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:27:54.236556 | orchestrator |
2026-03-31 03:27:54.236565 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-31 03:27:54.236573 | orchestrator | Tuesday 31 March 2026 03:27:28 +0000 (0:00:14.740) 0:00:47.906 *********
2026-03-31 03:27:54.236581 | orchestrator |
2026-03-31 03:27:54.236590 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-31 03:27:54.236598 | orchestrator | Tuesday 31 March 2026 03:27:28 +0000 (0:00:00.071) 0:00:47.978 *********
2026-03-31 03:27:54.236606 | orchestrator |
2026-03-31 03:27:54.236615 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-31 03:27:54.236623 | orchestrator | Tuesday 31 March 2026 03:27:28 +0000 (0:00:00.075) 0:00:48.053 *********
2026-03-31 03:27:54.236632 | orchestrator |
2026-03-31 03:27:54.236640 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-31 03:27:54.236649 | orchestrator | Tuesday 31 March 2026 03:27:28 +0000 (0:00:00.074) 0:00:48.128 *********
2026-03-31 03:27:54.236698 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:27:54.236708 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:27:54.236718 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:27:54.236727 | orchestrator |
2026-03-31 03:27:54.236737 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:27:54.236748 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-31 03:27:54.236760 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-31 03:27:54.236770 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-31 03:27:54.236780 | orchestrator |
2026-03-31 03:27:54.236789 | orchestrator |
2026-03-31 03:27:54.236798 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:27:54.236808 | orchestrator | Tuesday 31 March 2026 03:27:54 +0000 (0:00:25.941) 0:01:14.070 *********
2026-03-31 03:27:54.236818 | orchestrator | ===============================================================================
2026-03-31 03:27:54.236832 | orchestrator | horizon : Restart horizon container ------------------------------------ 25.94s
2026-03-31 03:27:54.236843 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.74s
2026-03-31 03:27:54.236852 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.54s
2026-03-31 03:27:54.236862 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s
2026-03-31 03:27:54.236872 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s
2026-03-31 03:27:54.236881 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.94s
2026-03-31 03:27:54.236891 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s
2026-03-31 03:27:54.236901 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.81s
2026-03-31 03:27:54.236910 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.62s
2026-03-31 03:27:54.236921 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s
2026-03-31 03:27:54.236931 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.26s
2026-03-31 03:27:54.236940 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s
2026-03-31 03:27:54.236955 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s
2026-03-31 03:27:54.236971 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s
2026-03-31 03:27:54.672752 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2026-03-31 03:27:54.672837 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s
2026-03-31 03:27:54.672847 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-03-31 03:27:54.672855 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2026-03-31 03:27:54.672863 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2026-03-31 03:27:54.672870 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.55s
2026-03-31 03:27:57.121536 | orchestrator | 2026-03-31 03:27:57 | INFO  | Task 4455b203-c10f-454a-852d-7aebcee40d17 (skyline) was prepared for execution.
2026-03-31 03:27:57.121607 | orchestrator | 2026-03-31 03:27:57 | INFO  | It takes a moment until task 4455b203-c10f-454a-852d-7aebcee40d17 (skyline) has been started and output is visible here.
2026-03-31 03:28:26.960920 | orchestrator |
2026-03-31 03:28:26.961061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:28:26.961086 | orchestrator |
2026-03-31 03:28:26.961104 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:28:26.961116 | orchestrator | Tuesday 31 March 2026 03:28:01 +0000 (0:00:00.273) 0:00:00.273 *********
2026-03-31 03:28:26.961127 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:28:26.961140 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:28:26.961151 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:28:26.961161 | orchestrator |
2026-03-31 03:28:26.961172 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:28:26.961183 | orchestrator | Tuesday 31 March 2026 03:28:01 +0000 (0:00:00.319) 0:00:00.592 *********
2026-03-31 03:28:26.961194 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-03-31 03:28:26.961205 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-03-31 03:28:26.961216 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-03-31 03:28:26.961226 | orchestrator |
2026-03-31 03:28:26.961237 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-03-31 03:28:26.961248 | orchestrator |
2026-03-31 03:28:26.961258 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-31 03:28:26.961269 | orchestrator | Tuesday 31 March 2026 03:28:02 +0000 (0:00:00.459) 0:00:01.052 *********
2026-03-31 03:28:26.961281 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:28:26.961292 | orchestrator |
2026-03-31 03:28:26.961303 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-03-31 03:28:26.961313 | orchestrator | Tuesday 31 March 2026 03:28:02 +0000 (0:00:00.594) 0:00:01.647 *********
2026-03-31 03:28:26.961324 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-03-31 03:28:26.961335 | orchestrator |
2026-03-31 03:28:26.961346 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-03-31 03:28:26.961357 | orchestrator | Tuesday 31 March 2026 03:28:05 +0000 (0:00:03.124) 0:00:04.772 *********
2026-03-31 03:28:26.961368 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-03-31 03:28:26.961379 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-03-31 03:28:26.961390 | orchestrator |
2026-03-31 03:28:26.961401 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-03-31 03:28:26.961412 | orchestrator | Tuesday 31 March 2026 03:28:12 +0000 (0:00:06.100) 0:00:10.872 *********
2026-03-31 03:28:26.961423 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:28:26.961466 | orchestrator |
2026-03-31 03:28:26.961479 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-03-31 03:28:26.961492 | orchestrator | Tuesday 31 March 2026 03:28:15 +0000 (0:00:03.082) 0:00:13.955 *********
2026-03-31 03:28:26.961505 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:28:26.961517 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-03-31 03:28:26.961530 | orchestrator |
2026-03-31 03:28:26.961557 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-03-31 03:28:26.961570 | orchestrator | Tuesday 31 March 2026 03:28:18 +0000 (0:00:03.827) 0:00:17.782 *********
2026-03-31 03:28:26.961582 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:28:26.961594 | orchestrator |
2026-03-31 03:28:26.961634 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] *********************
2026-03-31 03:28:26.961647 | orchestrator | Tuesday 31 March 2026 03:28:21 +0000 (0:00:02.944) 0:00:20.726 *********
2026-03-31 03:28:26.961658 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin)
2026-03-31 03:28:26.961670 | orchestrator |
2026-03-31 03:28:26.961683 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-03-31 03:28:26.961695 | orchestrator | Tuesday 31 March 2026 03:28:25 +0000 (0:00:03.649) 0:00:24.376 *********
2026-03-31 03:28:26.961711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-31 03:28:26.961752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:26.961768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:26.961796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:26.961809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:26.961830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:28:30.870688 | orchestrator |
2026-03-31 03:28:30.870788 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-31 03:28:30.870803 | orchestrator | Tuesday 31 March 2026 03:28:26 +0000 (0:00:01.375) 0:00:25.752 *********
2026-03-31 03:28:30.870813 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:28:30.870823 | orchestrator |
2026-03-31 03:28:30.870832 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ********
2026-03-31 03:28:30.870841 | orchestrator | Tuesday 31 March 2026 03:28:27 +0000 (0:00:00.765) 0:00:26.517 *********
2026-03-31 03:28:30.870852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy':
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:30.870908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:30.870926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:30.870961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:30.870977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:28:30.871001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:28:30.871015 | orchestrator |
2026-03-31 03:28:30.871068 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] ***
2026-03-31 03:28:30.871087 | orchestrator | Tuesday 31 March 2026 03:28:30 +0000 (0:00:02.544) 0:00:29.062 *********
2026-03-31 03:28:30.871103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 03:28:30.871118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 03:28:30.871133 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:28:30.871161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192383 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:28:32.192421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-31 03:28:32.192436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:28:32.192449 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:28:32.192463 | orchestrator |
2026-03-31 03:28:32.192478 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] *****
2026-03-31 03:28:32.192493 | orchestrator | Tuesday 31 March 2026 03:28:30 +0000 (0:00:00.609) 0:00:29.671 *********
2026-03-31 03:28:32.192507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group':
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192577 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:28:32.192681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192714 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:28:32.192727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 03:28:32.192763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 03:28:40.800488 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:28:40.800624 | orchestrator | 2026-03-31 03:28:40.800649 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-31 03:28:40.800666 | orchestrator | Tuesday 31 March 2026 03:28:32 +0000 (0:00:01.318) 0:00:30.990 ********* 2026-03-31 03:28:40.800704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:28:40.800827 | orchestrator |
2026-03-31 03:28:40.800836 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-03-31 03:28:40.800845 | orchestrator | Tuesday 31 March 2026 03:28:34 +0000 (0:00:02.485) 0:00:33.475 *********
2026-03-31 03:28:40.800854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-31 03:28:40.800863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-31 03:28:40.800871 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-31 03:28:40.800886 | orchestrator |
2026-03-31 03:28:40.800895 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-03-31 03:28:40.800903 | orchestrator | Tuesday 31 March 2026 03:28:36 +0000 (0:00:01.611) 0:00:35.087 *********
2026-03-31 03:28:40.800912 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-31 03:28:40.800921 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-31 03:28:40.800929 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-31 03:28:40.800938 | orchestrator |
2026-03-31 03:28:40.800946 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-03-31 03:28:40.800955 | orchestrator | Tuesday 31 March 2026 03:28:38 +0000 (0:00:02.100) 0:00:37.188 *********
2026-03-31 03:28:40.800964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:40.800985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.955957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956083 | orchestrator | 2026-03-31 03:28:42.956091 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-31 03:28:42.956099 | orchestrator | Tuesday 31 March 2026 03:28:40 +0000 (0:00:02.410) 0:00:39.598 ********* 2026-03-31 03:28:42.956105 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:28:42.956113 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 03:28:42.956131 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:28:42.956137 | orchestrator | 2026-03-31 03:28:42.956155 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-31 03:28:42.956162 | orchestrator | Tuesday 31 March 2026 03:28:41 +0000 (0:00:00.324) 0:00:39.923 ********* 2026-03-31 03:28:42.956168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:28:42.956210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 03:29:16.275872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-31 03:29:16.275988 | orchestrator |
2026-03-31 03:29:16.276013 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-03-31 03:29:16.276038 | orchestrator | Tuesday 31 March 2026 03:28:42 +0000 (0:00:01.830) 0:00:41.754 *********
2026-03-31 03:29:16.276068 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:29:16.276089 | orchestrator |
2026-03-31 03:29:16.276108 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-03-31 03:29:16.276127 | orchestrator | Tuesday 31 March 2026 03:28:44 +0000 (0:00:02.024) 0:00:43.778 *********
2026-03-31 03:29:16.276146 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:29:16.276165 | orchestrator |
2026-03-31 03:29:16.276186 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-03-31 03:29:16.276206 | orchestrator | Tuesday 31 March 2026 03:28:47 +0000 (0:00:02.145) 0:00:45.924 *********
2026-03-31 03:29:16.276227 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:29:16.276245 | orchestrator |
2026-03-31 03:29:16.276264 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-31 03:29:16.276285 | orchestrator | Tuesday 31 March 2026 03:28:54 +0000 (0:00:07.677) 0:00:53.601 *********
2026-03-31 03:29:16.276305 | orchestrator |
2026-03-31 03:29:16.276323 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-31 03:29:16.276342 | orchestrator | Tuesday 31 March 2026 03:28:54 +0000 (0:00:00.068) 0:00:53.670 *********
2026-03-31 03:29:16.276362 | orchestrator |
2026-03-31 03:29:16.276384 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-31 03:29:16.276406 | orchestrator | Tuesday 31 March 2026 03:28:54 +0000 (0:00:00.071) 0:00:53.741 *********
2026-03-31 03:29:16.276427 | orchestrator |
2026-03-31 03:29:16.276444 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-03-31 03:29:16.276456 | orchestrator | Tuesday 31 March 2026 03:28:55 +0000 (0:00:00.071) 0:00:53.812 *********
2026-03-31 03:29:16.276469 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:29:16.276481 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:29:16.276494 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:29:16.276506 | orchestrator |
2026-03-31 03:29:16.276553 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-03-31 03:29:16.276570 | orchestrator | Tuesday 31 March 2026 03:29:06 +0000 (0:00:11.522) 0:01:05.335 *********
2026-03-31 03:29:16.276582 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:29:16.276595 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:29:16.276607 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:29:16.276620 | orchestrator |
2026-03-31 03:29:16.276632 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:29:16.276646 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 03:29:16.276661 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 03:29:16.276701 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 03:29:16.276715 | orchestrator |
2026-03-31 03:29:16.276727 | orchestrator |
2026-03-31 03:29:16.276739 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:29:16.276753 | orchestrator | Tuesday 31 March 2026 03:29:15 +0000 (0:00:09.309) 0:01:14.644 *********
2026-03-31 03:29:16.276778 | orchestrator | ===============================================================================
2026-03-31 03:29:16.276789 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.52s
2026-03-31 03:29:16.276800 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.31s
2026-03-31 03:29:16.276811 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.68s
2026-03-31 03:29:16.276822 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.10s
2026-03-31 03:29:16.276832 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.83s
2026-03-31 03:29:16.276843 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.65s
2026-03-31 03:29:16.276854 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.12s
2026-03-31 03:29:16.276865 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.08s
2026-03-31 03:29:16.276898 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 2.94s
2026-03-31 03:29:16.276909 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.54s
2026-03-31 03:29:16.276920 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.49s
2026-03-31 03:29:16.276931 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.41s
2026-03-31 03:29:16.276941 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.15s
2026-03-31 03:29:16.276952 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.10s
2026-03-31 03:29:16.276962 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.02s
2026-03-31 03:29:16.276973 | orchestrator | skyline : Check skyline container --------------------------------------- 1.83s
2026-03-31 03:29:16.276983 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.61s
2026-03-31 03:29:16.276994 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.38s
2026-03-31 03:29:16.277005 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.32s
2026-03-31 03:29:16.277016 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.77s
2026-03-31 03:29:18.754720 | orchestrator | 2026-03-31 03:29:18 | INFO  | Task 8fa8b9b1-100a-43a5-9f3e-15666d2133f3 (glance) was prepared for execution.
2026-03-31 03:29:18.756407 | orchestrator | 2026-03-31 03:29:18 | INFO  | It takes a moment until task 8fa8b9b1-100a-43a5-9f3e-15666d2133f3 (glance) has been started and output is visible here.
2026-03-31 03:29:51.133948 | orchestrator |
2026-03-31 03:29:51.134141 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:29:51.134172 | orchestrator |
2026-03-31 03:29:51.134189 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:29:51.134206 | orchestrator | Tuesday 31 March 2026 03:29:23 +0000 (0:00:00.277) 0:00:00.277 *********
2026-03-31 03:29:51.134224 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:29:51.134241 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:29:51.134258 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:29:51.134274 | orchestrator |
2026-03-31 03:29:51.134289 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:29:51.134305 | orchestrator | Tuesday 31 March 2026 03:29:23 +0000 (0:00:00.332) 0:00:00.609 *********
2026-03-31 03:29:51.134321 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-31 03:29:51.134337 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-31 03:29:51.134383 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-31 03:29:51.134398 | orchestrator |
2026-03-31 03:29:51.134416 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-31 03:29:51.134432 | orchestrator |
2026-03-31 03:29:51.134449 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-31 03:29:51.134465 | orchestrator | Tuesday 31 March 2026 03:29:23 +0000 (0:00:00.453) 0:00:01.063 *********
2026-03-31 03:29:51.134512 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:29:51.134532 | orchestrator |
2026-03-31 03:29:51.134543 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-31 03:29:51.134555 | orchestrator | Tuesday 31 March 2026 03:29:24 +0000 (0:00:00.587) 0:00:01.650 *********
2026-03-31 03:29:51.134566 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-31 03:29:51.134577 | orchestrator |
2026-03-31 03:29:51.134588 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-31 03:29:51.134599 | orchestrator | Tuesday 31 March 2026 03:29:27 +0000 (0:00:03.234) 0:00:04.885 *********
2026-03-31 03:29:51.134611 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-31 03:29:51.134622 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-31 03:29:51.134634 | orchestrator |
2026-03-31 03:29:51.134645 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-31 03:29:51.134656 | orchestrator | Tuesday 31 March 2026 03:29:33 +0000 (0:00:06.029) 0:00:10.915 *********
2026-03-31 03:29:51.134667 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:29:51.134678 | orchestrator |
2026-03-31 03:29:51.134690 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-31 03:29:51.134700 | orchestrator | Tuesday 31 March 2026 03:29:36 +0000 (0:00:03.098) 0:00:14.013 *********
2026-03-31 03:29:51.134712 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:29:51.134738 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-31 03:29:51.134750 | orchestrator |
2026-03-31 03:29:51.134759 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-31 03:29:51.134769 | orchestrator | Tuesday 31 March 2026 03:29:40 +0000 (0:00:03.785) 0:00:17.798 *********
2026-03-31 03:29:51.134779 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:29:51.134788 | orchestrator |
2026-03-31 03:29:51.134798 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-03-31 03:29:51.134807 | orchestrator | Tuesday 31 March 2026 03:29:43 +0000 (0:00:02.864) 0:00:20.662 *********
2026-03-31 03:29:51.134817 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-03-31 03:29:51.134827 | orchestrator |
2026-03-31 03:29:51.134836 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-03-31 03:29:51.134846 | orchestrator | Tuesday 31 March 2026 03:29:46 +0000 (0:00:03.361) 0:00:24.024 *********
2026-03-31 03:29:51.134886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:29:51.134912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:29:51.134929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:29:51.134947 | orchestrator | 2026-03-31 03:29:51.134957 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-31 03:29:51.134966 | orchestrator | Tuesday 31 March 2026 03:29:50 +0000 (0:00:03.523) 0:00:27.547 ********* 2026-03-31 03:29:51.134977 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:29:51.134987 | orchestrator | 2026-03-31 03:29:51.135004 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-31 03:30:06.998382 | orchestrator | Tuesday 31 March 2026 03:29:51 +0000 (0:00:00.764) 0:00:28.312 ********* 2026-03-31 03:30:06.998557 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:30:06.998579 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:30:06.998590 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:30:06.998603 | orchestrator | 2026-03-31 03:30:06.998616 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-31 03:30:06.998627 | orchestrator | Tuesday 31 March 2026 03:29:54 +0000 (0:00:03.611) 0:00:31.924 ********* 2026-03-31 03:30:06.998640 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 03:30:06.998654 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 03:30:06.998666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 03:30:06.998678 | orchestrator | 2026-03-31 03:30:06.998689 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-31 03:30:06.998701 | orchestrator | Tuesday 31 March 2026 03:29:56 +0000 (0:00:01.559) 0:00:33.483 ********* 2026-03-31 03:30:06.998713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 
03:30:06.998725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 03:30:06.998738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-31 03:30:06.998750 | orchestrator | 2026-03-31 03:30:06.998762 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-31 03:30:06.998774 | orchestrator | Tuesday 31 March 2026 03:29:57 +0000 (0:00:01.360) 0:00:34.844 ********* 2026-03-31 03:30:06.998787 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:30:06.998801 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:30:06.998813 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:30:06.998826 | orchestrator | 2026-03-31 03:30:06.998839 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-31 03:30:06.998852 | orchestrator | Tuesday 31 March 2026 03:29:58 +0000 (0:00:00.677) 0:00:35.521 ********* 2026-03-31 03:30:06.998864 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:06.998876 | orchestrator | 2026-03-31 03:30:06.998889 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-31 03:30:06.998902 | orchestrator | Tuesday 31 March 2026 03:29:58 +0000 (0:00:00.159) 0:00:35.681 ********* 2026-03-31 03:30:06.998914 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:06.998925 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:06.998934 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:06.998944 | orchestrator | 2026-03-31 03:30:06.998958 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-31 03:30:06.998990 | orchestrator | Tuesday 31 March 2026 03:29:58 +0000 (0:00:00.314) 0:00:35.996 ********* 2026-03-31 03:30:06.999004 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:30:06.999017 | orchestrator | 2026-03-31 03:30:06.999028 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-31 03:30:06.999066 | orchestrator | Tuesday 31 March 2026 03:29:59 +0000 (0:00:00.791) 0:00:36.787 ********* 2026-03-31 03:30:06.999088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:06.999130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:06.999154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:06.999178 | orchestrator | 2026-03-31 03:30:06.999192 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-31 03:30:06.999205 | orchestrator | Tuesday 31 March 2026 03:30:03 +0000 (0:00:04.001) 0:00:40.788 ********* 2026-03-31 03:30:06.999228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:11.092642 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:11.092746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:11.092778 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:11.092788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:11.092795 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:11.092801 | orchestrator | 2026-03-31 03:30:11.092808 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-31 03:30:11.092815 | orchestrator | Tuesday 31 March 2026 03:30:06 +0000 (0:00:03.388) 0:00:44.177 ********* 2026-03-31 03:30:11.092843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:11.092858 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:11.092864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:11.092872 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:11.092885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 03:30:46.783502 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.783640 | orchestrator | 2026-03-31 03:30:46.783667 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-31 03:30:46.783688 | orchestrator | Tuesday 31 March 2026 03:30:11 +0000 (0:00:04.092) 0:00:48.270 ********* 2026-03-31 03:30:46.783707 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.783725 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.783777 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.783797 | orchestrator | 2026-03-31 03:30:46.783815 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-31 03:30:46.783835 | orchestrator | Tuesday 31 March 2026 03:30:14 +0000 (0:00:03.634) 0:00:51.905 ********* 2026-03-31 03:30:46.783860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:46.783886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:46.784036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:30:46.784067 | orchestrator | 2026-03-31 03:30:46.784089 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-31 03:30:46.784110 | orchestrator | Tuesday 31 March 2026 03:30:18 +0000 (0:00:04.021) 0:00:55.926 ********* 2026-03-31 03:30:46.784130 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:30:46.784151 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:30:46.784172 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:30:46.784193 | orchestrator | 2026-03-31 03:30:46.784214 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-31 03:30:46.784235 | orchestrator | Tuesday 31 March 2026 03:30:24 +0000 (0:00:05.787) 0:01:01.713 ********* 2026-03-31 03:30:46.784256 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.784277 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.784298 | 
orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.784318 | orchestrator | 2026-03-31 03:30:46.784339 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-31 03:30:46.784357 | orchestrator | Tuesday 31 March 2026 03:30:28 +0000 (0:00:03.574) 0:01:05.288 ********* 2026-03-31 03:30:46.784377 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.784397 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.784550 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.784573 | orchestrator | 2026-03-31 03:30:46.784594 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-31 03:30:46.784614 | orchestrator | Tuesday 31 March 2026 03:30:31 +0000 (0:00:03.255) 0:01:08.543 ********* 2026-03-31 03:30:46.784633 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.784652 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.784671 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.784690 | orchestrator | 2026-03-31 03:30:46.784708 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-31 03:30:46.784727 | orchestrator | Tuesday 31 March 2026 03:30:34 +0000 (0:00:03.325) 0:01:11.869 ********* 2026-03-31 03:30:46.784764 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.784783 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.784801 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.784820 | orchestrator | 2026-03-31 03:30:46.784837 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-31 03:30:46.784854 | orchestrator | Tuesday 31 March 2026 03:30:38 +0000 (0:00:03.467) 0:01:15.336 ********* 2026-03-31 03:30:46.784873 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.784892 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.784910 | 
orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.784929 | orchestrator | 2026-03-31 03:30:46.784946 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-31 03:30:46.784964 | orchestrator | Tuesday 31 March 2026 03:30:38 +0000 (0:00:00.612) 0:01:15.948 ********* 2026-03-31 03:30:46.784981 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-31 03:30:46.784998 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:30:46.785008 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-31 03:30:46.785017 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:30:46.785027 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-31 03:30:46.785037 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:30:46.785046 | orchestrator | 2026-03-31 03:30:46.785056 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-31 03:30:46.785065 | orchestrator | Tuesday 31 March 2026 03:30:42 +0000 (0:00:03.389) 0:01:19.338 ********* 2026-03-31 03:30:46.785075 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:30:46.785084 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:30:46.785094 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:30:46.785103 | orchestrator | 2026-03-31 03:30:46.785113 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-31 03:30:46.785137 | orchestrator | Tuesday 31 March 2026 03:30:46 +0000 (0:00:04.615) 0:01:23.954 ********* 2026-03-31 03:31:58.032407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:31:58.032536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:31:58.032636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 03:31:58.032662 | orchestrator | 2026-03-31 03:31:58.032683 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-31 03:31:58.032703 | orchestrator | Tuesday 31 March 2026 03:30:50 +0000 (0:00:03.809) 0:01:27.763 ********* 2026-03-31 03:31:58.032723 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:31:58.032743 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:31:58.032760 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:31:58.032778 | orchestrator | 2026-03-31 03:31:58.032797 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-31 03:31:58.032814 | orchestrator | Tuesday 31 March 2026 03:30:51 +0000 (0:00:00.530) 0:01:28.294 ********* 2026-03-31 03:31:58.032833 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.032853 | orchestrator | 2026-03-31 03:31:58.032873 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-31 03:31:58.032907 | orchestrator | Tuesday 31 March 2026 03:30:53 +0000 (0:00:02.010) 0:01:30.304 ********* 2026-03-31 03:31:58.032924 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.032937 | orchestrator | 2026-03-31 03:31:58.032949 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-31 03:31:58.032962 | orchestrator | Tuesday 31 March 2026 03:30:55 +0000 (0:00:02.125) 0:01:32.430 ********* 2026-03-31 03:31:58.032974 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.032987 | orchestrator | 2026-03-31 03:31:58.032999 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-31 03:31:58.033012 | orchestrator | Tuesday 31 March 2026 03:30:57 +0000 (0:00:01.984) 0:01:34.415 ********* 2026-03-31 03:31:58.033024 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.033037 | orchestrator | 2026-03-31 03:31:58.033049 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-31 03:31:58.033062 | orchestrator | Tuesday 31 March 2026 03:31:23 +0000 (0:00:26.643) 0:02:01.059 ********* 2026-03-31 03:31:58.033074 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.033087 | orchestrator | 2026-03-31 03:31:58.033099 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-31 03:31:58.033112 | orchestrator | Tuesday 31 March 2026 03:31:25 +0000 (0:00:02.003) 0:02:03.062 ********* 2026-03-31 03:31:58.033124 | orchestrator | 2026-03-31 03:31:58.033137 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-31 03:31:58.033149 | orchestrator | Tuesday 31 March 2026 03:31:25 +0000 (0:00:00.070) 0:02:03.133 ********* 2026-03-31 03:31:58.033162 | orchestrator | 2026-03-31 03:31:58.033174 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-31 03:31:58.033186 | orchestrator | Tuesday 31 March 2026 03:31:26 +0000 (0:00:00.070) 0:02:03.203 ********* 2026-03-31 03:31:58.033198 | orchestrator | 2026-03-31 03:31:58.033211 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-31 03:31:58.033222 | orchestrator | Tuesday 31 March 2026 03:31:26 +0000 (0:00:00.071) 0:02:03.274 ********* 2026-03-31 03:31:58.033233 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:31:58.033244 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:31:58.033255 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:31:58.033265 | orchestrator | 2026-03-31 03:31:58.033276 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:31:58.033288 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-31 03:31:58.033301 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-31 03:31:58.033312 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-31 03:31:58.033323 | orchestrator | 2026-03-31 03:31:58.033333 | orchestrator | 2026-03-31 03:31:58.033344 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:31:58.033384 | orchestrator | Tuesday 31 March 2026 03:31:58 +0000 (0:00:31.925) 0:02:35.200 ********* 2026-03-31 03:31:58.033395 | orchestrator | =============================================================================== 2026-03-31 03:31:58.033405 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.93s 2026-03-31 03:31:58.033416 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.64s 2026-03-31 03:31:58.033427 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.03s 2026-03-31 03:31:58.033448 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.79s 2026-03-31 03:31:58.416755 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.62s 2026-03-31 03:31:58.416875 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.09s 2026-03-31 03:31:58.416913 | orchestrator | glance : Copying over config.json files for services -------------------- 4.02s 2026-03-31 03:31:58.416925 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.00s 2026-03-31 03:31:58.416936 | orchestrator | glance : Check glance containers ---------------------------------------- 3.81s 2026-03-31 03:31:58.416947 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.79s 2026-03-31 03:31:58.416957 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.63s 2026-03-31 03:31:58.416968 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.61s 2026-03-31 03:31:58.416979 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.57s 2026-03-31 03:31:58.416990 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.52s 2026-03-31 03:31:58.417001 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.47s 2026-03-31 03:31:58.417011 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.39s 2026-03-31 03:31:58.417022 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.39s 2026-03-31 03:31:58.417033 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.36s 2026-03-31 03:31:58.417044 | 
orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.33s 2026-03-31 03:31:58.417055 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.26s 2026-03-31 03:32:00.842749 | orchestrator | 2026-03-31 03:32:00 | INFO  | Task 4015d9f2-0d1b-42e6-b290-bb26a9fc167d (cinder) was prepared for execution. 2026-03-31 03:32:00.842871 | orchestrator | 2026-03-31 03:32:00 | INFO  | It takes a moment until task 4015d9f2-0d1b-42e6-b290-bb26a9fc167d (cinder) has been started and output is visible here. 2026-03-31 03:32:34.529092 | orchestrator | 2026-03-31 03:32:34.529204 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:32:34.529219 | orchestrator | 2026-03-31 03:32:34.529230 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:32:34.529240 | orchestrator | Tuesday 31 March 2026 03:32:05 +0000 (0:00:00.273) 0:00:00.273 ********* 2026-03-31 03:32:34.529250 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:32:34.529261 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:32:34.529271 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:32:34.529281 | orchestrator | 2026-03-31 03:32:34.529291 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:32:34.529301 | orchestrator | Tuesday 31 March 2026 03:32:05 +0000 (0:00:00.308) 0:00:00.581 ********* 2026-03-31 03:32:34.529361 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-31 03:32:34.529374 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-31 03:32:34.529383 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-31 03:32:34.529393 | orchestrator | 2026-03-31 03:32:34.529403 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-31 03:32:34.529412 | orchestrator | 
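Editor's note: the glance play above passes each container's `haproxy` dict to the loadbalancer role. As a rough illustration only (the backend name is a guess; the real template belongs to kolla-ansible), the logged `custom_member_list` entries plus the `backend_http_extra` option `timeout server 6h` would render into an HAProxy backend along these lines:

```
backend glance_api_back
    mode http
    timeout server 6h
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

Here `check inter 2000` health-checks each member every 2000 ms, `rise 2` marks a member up after 2 consecutive successes, and `fall 5` marks it down after 5 failures.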
2026-03-31 03:32:34.529422 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-31 03:32:34.529432 | orchestrator | Tuesday 31 March 2026 03:32:06 +0000 (0:00:00.464) 0:00:01.045 ********* 2026-03-31 03:32:34.529441 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:32:34.529452 | orchestrator | 2026-03-31 03:32:34.529463 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-31 03:32:34.529473 | orchestrator | Tuesday 31 March 2026 03:32:06 +0000 (0:00:00.574) 0:00:01.619 ********* 2026-03-31 03:32:34.529483 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-31 03:32:34.529493 | orchestrator | 2026-03-31 03:32:34.529502 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-31 03:32:34.529534 | orchestrator | Tuesday 31 March 2026 03:32:09 +0000 (0:00:03.271) 0:00:04.891 ********* 2026-03-31 03:32:34.529549 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-31 03:32:34.529566 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-31 03:32:34.529584 | orchestrator | 2026-03-31 03:32:34.529601 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-31 03:32:34.529618 | orchestrator | Tuesday 31 March 2026 03:32:15 +0000 (0:00:06.002) 0:00:10.893 ********* 2026-03-31 03:32:34.529634 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 03:32:34.529652 | orchestrator | 2026-03-31 03:32:34.529668 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-31 03:32:34.529685 | orchestrator | Tuesday 31 March 2026 03:32:18 +0000 
(0:00:03.117) 0:00:14.011 ********* 2026-03-31 03:32:34.529701 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 03:32:34.529717 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-31 03:32:34.529735 | orchestrator | 2026-03-31 03:32:34.529754 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-31 03:32:34.529770 | orchestrator | Tuesday 31 March 2026 03:32:22 +0000 (0:00:03.742) 0:00:17.754 ********* 2026-03-31 03:32:34.529786 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:32:34.529803 | orchestrator | 2026-03-31 03:32:34.529819 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-31 03:32:34.529834 | orchestrator | Tuesday 31 March 2026 03:32:25 +0000 (0:00:03.058) 0:00:20.812 ********* 2026-03-31 03:32:34.529859 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-31 03:32:34.529869 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-31 03:32:34.529878 | orchestrator | 2026-03-31 03:32:34.529888 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-31 03:32:34.529898 | orchestrator | Tuesday 31 March 2026 03:32:32 +0000 (0:00:06.773) 0:00:27.585 ********* 2026-03-31 03:32:34.529911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:32:34.529946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:32:34.529958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:32:34.529978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:34.529994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:34.530005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:34.530070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:34.530092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:40.455895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:40.455998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:40.456029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:40.456040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:40.456056 | orchestrator |
2026-03-31 03:32:40.456072 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-31 03:32:40.456088 | orchestrator | Tuesday 31 March 2026 03:32:34 +0000 (0:00:02.062) 0:00:29.648 *********
2026-03-31 03:32:40.456102 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:32:40.456119 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:32:40.456134 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:32:40.456150 | orchestrator |
2026-03-31 03:32:40.456164 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-31 03:32:40.456180 | orchestrator | Tuesday 31 March 2026 03:32:35 +0000 (0:00:00.540) 0:00:30.188 *********
2026-03-31 03:32:40.456189 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:32:40.456198 | orchestrator |
2026-03-31 03:32:40.456228 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-31 03:32:40.456238 | orchestrator | Tuesday 31 March 2026 03:32:35 +0000 (0:00:00.555) 0:00:30.744 *********
2026-03-31 03:32:40.456247 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-31 03:32:40.456256 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-31 03:32:40.456264 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-31 03:32:40.456272 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-31 03:32:40.456281 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-31 03:32:40.456289 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-31 03:32:40.456297 | orchestrator |
2026-03-31 03:32:40.456357 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-03-31 03:32:40.456367 | orchestrator | Tuesday 31 March 2026 03:32:37 +0000 (0:00:01.691) 0:00:32.435 *********
2026-03-31 03:32:40.456394 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:40.456407 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:40.456423 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:40.456433 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:40.456470 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703637 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703743 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703773 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703784 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703813 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703839 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703849 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-31 03:32:51.703859 | orchestrator |
2026-03-31 03:32:51.703870 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-31 03:32:51.703880 | orchestrator | Tuesday 31 March 2026 03:32:40 +0000 (0:00:03.353) 0:00:35.789 *********
2026-03-31 03:32:51.703889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-31 03:32:51.703899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-31 03:32:51.703908 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-31 03:32:51.703917 | orchestrator |
2026-03-31 03:32:51.703926 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-31 03:32:51.703939 | orchestrator | Tuesday 31 March 2026 03:32:42 +0000 (0:00:01.630) 0:00:37.420 *********
2026-03-31 03:32:51.703950 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-31 03:32:51.703960 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-31 03:32:51.703969 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-31 03:32:51.703977 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-31 03:32:51.703986 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-31 03:32:51.704002 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-31 03:32:51.704010 | orchestrator |
2026-03-31 03:32:51.704019 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-31 03:32:51.704028 | orchestrator | Tuesday 31 March 2026 03:32:45 +0000 (0:00:02.885) 0:00:40.306 *********
2026-03-31 03:32:51.704037 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-31 03:32:51.704046 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-31 03:32:51.704055 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-31 03:32:51.704064 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-31 03:32:51.704072 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-31 03:32:51.704081 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-31 03:32:51.704090 | orchestrator |
2026-03-31 03:32:51.704098 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-31 03:32:51.704107 | orchestrator | Tuesday 31 March 2026 03:32:46 +0000 (0:00:00.151) 0:00:41.362 *********
2026-03-31 03:32:51.704116 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:32:51.704124 | orchestrator |
2026-03-31 03:32:51.704133 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-31 03:32:51.704142 | orchestrator | Tuesday 31 March 2026 03:32:46 +0000 (0:00:00.540) 0:00:41.513 *********
2026-03-31 03:32:51.704150 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:32:51.704159 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:32:51.704169 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:32:51.704180 | orchestrator |
2026-03-31 03:32:51.704189 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-31 03:32:51.704199 | orchestrator | Tuesday 31 March 2026 03:32:47 +0000 (0:00:00.540) 0:00:42.054 *********
2026-03-31 03:32:51.704210 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:32:51.704220 | orchestrator |
2026-03-31 03:32:51.704230 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-31 03:32:51.704239 | orchestrator | Tuesday 31 March 2026 03:32:47 +0000 (0:00:00.586) 0:00:42.641 *********
2026-03-31 03:32:51.704257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.769235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.769451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.769470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.769588 | orchestrator |
2026-03-31 03:32:52.769599 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-03-31 03:32:52.769609 | orchestrator | Tuesday 31 March 2026 03:32:51 +0000 (0:00:04.183) 0:00:46.825 *********
2026-03-31 03:32:52.769626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.881279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881459 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:32:52.881470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.881480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881548 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:32:52.881557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:52.881565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:52.881596 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:32:52.881604 | orchestrator |
2026-03-31 03:32:52.881613 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-31 03:32:52.881628 | orchestrator | Tuesday 31 March 2026 03:32:52 +0000 (0:00:01.077) 0:00:47.903 *********
2026-03-31 03:32:53.509913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:32:53.510078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:32:53.510102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:32:53.510115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:32:53.510128 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:32:53.510143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 03:32:53.510203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:32:53.510223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 03:32:53.510236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 03:32:53.510247 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:32:53.510260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 03:32:53.510272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:32:53.510339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 03:32:58.234013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 03:32:58.234320 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:32:58.234353 | orchestrator | 2026-03-31 03:32:58.234376 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-31 03:32:58.234389 | orchestrator | Tuesday 31 March 2026 03:32:53 +0000 (0:00:00.995) 0:00:48.898 ********* 2026-03-31 03:32:58.234403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:32:58.234418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 
03:32:58.234456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:32:58.234489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:32:58.234596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258325 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258418 | orchestrator | 2026-03-31 03:33:12.258425 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-31 03:33:12.258431 | orchestrator | Tuesday 31 March 2026 03:32:58 +0000 (0:00:04.474) 0:00:53.373 ********* 2026-03-31 03:33:12.258436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-31 03:33:12.258442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-31 03:33:12.258446 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-31 03:33:12.258451 | orchestrator | 2026-03-31 03:33:12.258455 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-31 03:33:12.258474 | orchestrator | Tuesday 31 March 2026 03:33:00 +0000 (0:00:02.003) 0:00:55.377 ********* 2026-03-31 03:33:12.258480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:33:12.258487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:33:12.258508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 03:33:12.258513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:12.258550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:15.047815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:15.047930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:15.047981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-31 03:33:15.047994 | orchestrator | 2026-03-31 03:33:15.048004 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-31 03:33:15.048015 | orchestrator | Tuesday 31 March 2026 03:33:12 +0000 (0:00:11.994) 0:01:07.371 ********* 2026-03-31 03:33:15.048024 | orchestrator | changed: [testbed-node-0] 
2026-03-31 03:33:15.048034 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:33:15.048043 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:33:15.048051 | orchestrator |
2026-03-31 03:33:15.048060 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-31 03:33:15.048068 | orchestrator | Tuesday 31 March 2026 03:33:14 +0000 (0:00:01.663) 0:01:09.034 *********
2026-03-31 03:33:15.048079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:15.048103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:15.048131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:33:15.048149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:33:15.048158 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:33:15.048167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:15.048177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:15.048185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:33:15.048206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945261 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:33:18.945433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:18.945479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945520 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:33:18.945532 | orchestrator |
2026-03-31 03:33:18.945544 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-31 03:33:18.945557 | orchestrator | Tuesday 31 March 2026 03:33:15 +0000 (0:00:01.136) 0:01:10.171 *********
2026-03-31 03:33:18.945567 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:33:18.945578 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:33:18.945589 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:33:18.945600 | orchestrator |
2026-03-31 03:33:18.945611 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-31 03:33:18.945638 | orchestrator | Tuesday 31 March 2026 03:33:15 +0000 (0:00:00.778) 0:01:10.949 *********
2026-03-31 03:33:18.945670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:18.945692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:18.945708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-31 03:33:18.945729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:33:18.945859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.412947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.413106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.413133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.413149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.413258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-31 03:34:54.413280 | orchestrator |
2026-03-31 03:34:54.413297 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-31 03:34:54.413313 | orchestrator | Tuesday 31 March 2026 03:33:19 +0000 (0:00:03.122) 0:01:14.072 *********
2026-03-31 03:34:54.413327 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:34:54.413339 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:34:54.413347 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:34:54.413355 | orchestrator |
2026-03-31 03:34:54.413364 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-03-31 03:34:54.413372 | orchestrator | Tuesday 31 March 2026 03:33:19 +0000 (0:00:00.330) 0:01:14.402 *********
2026-03-31 03:34:54.413380 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413388 | orchestrator |
2026-03-31 03:34:54.413413 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-03-31 03:34:54.413422 | orchestrator | Tuesday 31 March 2026 03:33:21 +0000 (0:00:02.061) 0:01:16.464 *********
2026-03-31 03:34:54.413430 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413437 | orchestrator |
2026-03-31 03:34:54.413445 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-03-31 03:34:54.413455 | orchestrator | Tuesday 31 March 2026 03:33:23 +0000 (0:00:02.212) 0:01:18.676 *********
2026-03-31 03:34:54.413463 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413472 | orchestrator |
2026-03-31 03:34:54.413481 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-31 03:34:54.413490 | orchestrator | Tuesday 31 March 2026 03:33:42 +0000 (0:00:19.285) 0:01:37.962 *********
2026-03-31 03:34:54.413499 | orchestrator |
2026-03-31 03:34:54.413507 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-31 03:34:54.413517 | orchestrator | Tuesday 31 March 2026 03:33:43 +0000 (0:00:00.076) 0:01:38.039 *********
2026-03-31 03:34:54.413525 | orchestrator |
2026-03-31 03:34:54.413534 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-31 03:34:54.413543 | orchestrator | Tuesday 31 March 2026 03:33:43 +0000 (0:00:00.077) 0:01:38.116 *********
2026-03-31 03:34:54.413552 | orchestrator |
2026-03-31 03:34:54.413561 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-03-31 03:34:54.413570 | orchestrator | Tuesday 31 March 2026 03:33:43 +0000 (0:00:00.079) 0:01:38.196 *********
2026-03-31 03:34:54.413579 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413588 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:34:54.413597 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:34:54.413606 | orchestrator |
2026-03-31 03:34:54.413615 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-31 03:34:54.413624 | orchestrator | Tuesday 31 March 2026 03:34:13 +0000 (0:00:29.972) 0:02:08.168 *********
2026-03-31 03:34:54.413632 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413641 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:34:54.413650 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:34:54.413659 | orchestrator |
2026-03-31 03:34:54.413668 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-31 03:34:54.413678 | orchestrator | Tuesday 31 March 2026 03:34:18 +0000 (0:00:05.745) 0:02:13.914 *********
2026-03-31 03:34:54.413692 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413715 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:34:54.413728 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:34:54.413742 | orchestrator |
2026-03-31 03:34:54.413756 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-31 03:34:54.413770 | orchestrator | Tuesday 31 March 2026 03:34:45 +0000 (0:00:26.583) 0:02:40.497 *********
2026-03-31 03:34:54.413783 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:34:54.413793 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:34:54.413801 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:34:54.413809 | orchestrator |
2026-03-31 03:34:54.413816 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-31 03:34:54.413825 | orchestrator | Tuesday 31 March 2026 03:34:54 +0000 (0:00:08.601) 0:02:49.099 *********
2026-03-31 03:34:54.413832 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:34:54.413840 | orchestrator |
2026-03-31 03:34:54.413848 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:34:54.413856 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-31 03:34:54.413865 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-31 03:34:54.413874 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-31 03:34:54.413881 | orchestrator |
2026-03-31 03:34:54.413889 | orchestrator |
2026-03-31 03:34:54.413897 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:34:54.413905 | orchestrator | Tuesday 31 March 2026 03:34:54 +0000 (0:00:00.323) 0:02:49.422 *********
2026-03-31 03:34:54.413919 | orchestrator | ===============================================================================
2026-03-31 03:34:54.413926 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.97s
2026-03-31 03:34:54.413934 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.58s
2026-03-31 03:34:54.413942 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.29s
2026-03-31 03:34:54.413950 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.99s
2026-03-31 03:34:54.413957 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.60s
2026-03-31 03:34:54.413965 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.77s
2026-03-31 03:34:54.413973 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.00s
2026-03-31 03:34:54.413980 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.75s
2026-03-31 03:34:54.413988 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.47s
2026-03-31 03:34:54.413996 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.18s
2026-03-31 03:34:54.414003 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.74s
2026-03-31 03:34:54.414011 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.35s
2026-03-31 03:34:54.414073 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.27s
2026-03-31 03:34:54.414081 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.12s
2026-03-31 03:34:54.414097 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.12s
2026-03-31 03:34:55.055618 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.06s
2026-03-31 03:34:55.055747 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.89s
2026-03-31 03:34:55.055773 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.21s
2026-03-31 03:34:55.055794 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.06s
2026-03-31 03:34:55.055813 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.06s
2026-03-31 03:34:58.060848 | orchestrator | 2026-03-31 03:34:58 | INFO  | Task 9b6cdad8-a122-465f-ba72-53caa192e5cc (barbican) was prepared for execution.
2026-03-31 03:34:58.060951 | orchestrator | 2026-03-31 03:34:58 | INFO  | It takes a moment until task 9b6cdad8-a122-465f-ba72-53caa192e5cc (barbican) has been started and output is visible here.
2026-03-31 03:35:40.943045 | orchestrator |
2026-03-31 03:35:40.943159 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:35:40.943259 | orchestrator |
2026-03-31 03:35:40.943276 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:35:40.943289 | orchestrator | Tuesday 31 March 2026 03:35:03 +0000 (0:00:00.295) 0:00:00.295 *********
2026-03-31 03:35:40.943301 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:35:40.943313 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:35:40.943324 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:35:40.943335 | orchestrator |
2026-03-31 03:35:40.943346 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:35:40.943358 | orchestrator | Tuesday 31 March 2026 03:35:03 +0000 (0:00:00.366) 0:00:00.662 *********
2026-03-31 03:35:40.943369 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-31 03:35:40.943380 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-31 03:35:40.943391 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-31 03:35:40.943402 | orchestrator |
2026-03-31 03:35:40.943413 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-31 03:35:40.943424 | orchestrator |
2026-03-31 03:35:40.943434 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-31 03:35:40.943445 | orchestrator | Tuesday 31 March 2026 03:35:03 +0000 (0:00:00.562) 0:00:01.224 *********
2026-03-31 03:35:40.943457 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:35:40.943468 | orchestrator |
2026-03-31 03:35:40.943479 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-31 03:35:40.943489 | orchestrator | Tuesday 31 March 2026 03:35:04 +0000 (0:00:00.677) 0:00:01.901 *********
2026-03-31 03:35:40.943501 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-31 03:35:40.943512 | orchestrator |
2026-03-31 03:35:40.943523 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-31 03:35:40.943534 | orchestrator | Tuesday 31 March 2026 03:35:07 +0000 (0:00:03.335) 0:00:05.237 *********
2026-03-31 03:35:40.943544 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-31 03:35:40.943557 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-31 03:35:40.943569 | orchestrator |
2026-03-31 03:35:40.943582 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-31 03:35:40.943595 | orchestrator | Tuesday 31 March 2026 03:35:14 +0000 (0:00:06.147) 0:00:11.385 *********
2026-03-31 03:35:40.943608 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:35:40.943620 | orchestrator |
2026-03-31 03:35:40.943633 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-31 03:35:40.943645 | orchestrator | Tuesday 31 March 2026 03:35:17 +0000 (0:00:03.034) 0:00:14.419 *********
2026-03-31 03:35:40.943657 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:35:40.943687 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-31 03:35:40.943701 | orchestrator |
2026-03-31 03:35:40.943713 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-31 03:35:40.943725 | orchestrator | Tuesday 31 March 2026 03:35:21 +0000 (0:00:03.950) 0:00:18.370 *********
2026-03-31 03:35:40.943738 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:35:40.943750 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-31 03:35:40.943790 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-31 03:35:40.943808 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-31 03:35:40.943825 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-31 03:35:40.943842 | orchestrator |
2026-03-31 03:35:40.943860 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-31 03:35:40.943878 | orchestrator | Tuesday 31 March 2026 03:35:35 +0000 (0:00:14.541) 0:00:32.911 *********
2026-03-31 03:35:40.943895 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-31 03:35:40.943913 | orchestrator |
2026-03-31 03:35:40.943931 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-31 03:35:40.943950 | orchestrator | Tuesday 31 March 2026 03:35:39 +0000 (0:00:03.621) 0:00:36.532 *********
2026-03-31 03:35:40.943972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-31 03:35:40.944023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-31 03:35:40.944045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-31 03:35:40.944075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-31 03:35:40.944113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-31 03:35:40.944134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-31 03:35:40.944168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:35:47.322573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:35:47.322669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:35:47.322682 | orchestrator |
2026-03-31 03:35:47.322693 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-31 03:35:47.322703 | orchestrator | Tuesday 31 March 2026 03:35:40 +0000 (0:00:01.661) 0:00:38.194 *********
2026-03-31 03:35:47.322712 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-31 03:35:47.322720 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-31 03:35:47.322748 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-31 03:35:47.322757 | orchestrator |
2026-03-31 03:35:47.322765 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-31 03:35:47.322773 | orchestrator | Tuesday 31 March 2026 03:35:42 +0000 (0:00:01.270) 0:00:39.465 *********
2026-03-31 03:35:47.322781 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:35:47.322789 | orchestrator |
2026-03-31 03:35:47.322808 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-31 03:35:47.322816 | orchestrator | Tuesday 31 March 2026 03:35:42 +0000 (0:00:00.427) 0:00:39.892 *********
2026-03-31 03:35:47.322824 | orchestrator |
skipping: [testbed-node-0] 2026-03-31 03:35:47.322832 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:35:47.322840 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:35:47.322848 | orchestrator | 2026-03-31 03:35:47.322856 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-31 03:35:47.322864 | orchestrator | Tuesday 31 March 2026 03:35:42 +0000 (0:00:00.333) 0:00:40.226 ********* 2026-03-31 03:35:47.322872 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:35:47.322880 | orchestrator | 2026-03-31 03:35:47.322888 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-31 03:35:47.322896 | orchestrator | Tuesday 31 March 2026 03:35:43 +0000 (0:00:00.624) 0:00:40.850 ********* 2026-03-31 03:35:47.322906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:35:47.322930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:35:47.322940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:35:47.322959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:47.322970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:47.322978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:47.322987 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:47.323002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:48.879267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:35:48.879363 | orchestrator | 2026-03-31 03:35:48.879373 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-31 03:35:48.879380 | orchestrator | Tuesday 31 March 2026 03:35:47 +0000 (0:00:03.727) 0:00:44.578 ********* 2026-03-31 03:35:48.879400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:48.879407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879419 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:35:48.879426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:48.879443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879458 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:35:48.879467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:48.879472 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:35:48.879483 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:35:48.879488 | orchestrator | 2026-03-31 03:35:48.879493 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-31 03:35:48.879498 | orchestrator | Tuesday 31 March 2026 03:35:47 +0000 (0:00:00.688) 0:00:45.267 ********* 2026-03-31 03:35:48.879510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:52.550433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:52.550562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 
03:35:52.550582 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:35:52.550597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:52.550610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:52.550621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:35:52.550656 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:35:52.550689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:35:52.550708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:35:52.550720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:35:52.550731 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:35:52.550743 | orchestrator | 2026-03-31 03:35:52.550754 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-31 03:35:52.550767 | orchestrator | Tuesday 31 March 2026 03:35:48 +0000 (0:00:00.878) 0:00:46.145 ********* 2026-03-31 03:35:52.550778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:35:52.550791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:35:52.550849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:02.612760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:02.612948 | orchestrator | 2026-03-31 03:36:02.612960 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-31 03:36:02.612971 | orchestrator | Tuesday 31 March 2026 03:35:52 +0000 (0:00:03.663) 0:00:49.808 ********* 2026-03-31 03:36:02.612981 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:02.612993 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:36:02.613002 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:36:02.613012 | orchestrator | 2026-03-31 03:36:02.613036 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-31 03:36:02.613068 | orchestrator | Tuesday 31 March 2026 03:35:54 +0000 (0:00:01.579) 0:00:51.388 ********* 2026-03-31 03:36:02.613078 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:36:02.613098 | orchestrator | 2026-03-31 03:36:02.613108 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-31 03:36:02.613117 | orchestrator | Tuesday 31 March 2026 03:35:55 +0000 (0:00:01.069) 0:00:52.457 ********* 2026-03-31 03:36:02.613127 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:36:02.613137 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:36:02.613146 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:36:02.613155 | orchestrator | 2026-03-31 03:36:02.613199 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-31 03:36:02.613209 | orchestrator | Tuesday 31 March 2026 03:35:55 +0000 (0:00:00.614) 0:00:53.072 ********* 2026-03-31 03:36:02.613252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:02.613274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:02.613285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:02.613303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:03.519374 | orchestrator | 2026-03-31 03:36:03.519381 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-31 03:36:03.519388 | orchestrator | Tuesday 31 March 2026 03:36:02 +0000 (0:00:06.807) 0:00:59.879 ********* 2026-03-31 03:36:03.519406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:36:03.519416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:36:03.519422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:36:03.519438 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:36:03.519445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:36:03.519452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:36:03.519458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:36:03.519463 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:36:03.519477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 03:36:05.643956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:36:05.644089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:36:05.644107 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:36:05.644121 | orchestrator | 2026-03-31 03:36:05.644133 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-31 03:36:05.644145 | orchestrator | Tuesday 31 March 2026 03:36:03 +0000 (0:00:00.904) 0:01:00.783 ********* 2026-03-31 03:36:05.644157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:05.644221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:05.644268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 03:36:05.644300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:36:05.644429 | orchestrator | 2026-03-31 03:36:05.644441 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-31 03:36:05.644460 | orchestrator | Tuesday 31 March 2026 03:36:05 +0000 (0:00:02.121) 0:01:02.905 ********* 2026-03-31 03:36:38.393438 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:36:38.393533 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
03:36:38.393543 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:36:38.393550 | orchestrator | 2026-03-31 03:36:38.393558 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-31 03:36:38.393565 | orchestrator | Tuesday 31 March 2026 03:36:05 +0000 (0:00:00.339) 0:01:03.245 ********* 2026-03-31 03:36:38.393572 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393578 | orchestrator | 2026-03-31 03:36:38.393585 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-31 03:36:38.393591 | orchestrator | Tuesday 31 March 2026 03:36:07 +0000 (0:00:01.951) 0:01:05.197 ********* 2026-03-31 03:36:38.393597 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393603 | orchestrator | 2026-03-31 03:36:38.393610 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-31 03:36:38.393616 | orchestrator | Tuesday 31 March 2026 03:36:10 +0000 (0:00:02.151) 0:01:07.348 ********* 2026-03-31 03:36:38.393622 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393628 | orchestrator | 2026-03-31 03:36:38.393634 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-31 03:36:38.393641 | orchestrator | Tuesday 31 March 2026 03:36:21 +0000 (0:00:11.532) 0:01:18.880 ********* 2026-03-31 03:36:38.393647 | orchestrator | 2026-03-31 03:36:38.393653 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-31 03:36:38.393659 | orchestrator | Tuesday 31 March 2026 03:36:21 +0000 (0:00:00.063) 0:01:18.944 ********* 2026-03-31 03:36:38.393665 | orchestrator | 2026-03-31 03:36:38.393671 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-31 03:36:38.393677 | orchestrator | Tuesday 31 March 2026 03:36:21 +0000 (0:00:00.064) 0:01:19.009 ********* 2026-03-31 
03:36:38.393684 | orchestrator | 2026-03-31 03:36:38.393690 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-31 03:36:38.393696 | orchestrator | Tuesday 31 March 2026 03:36:21 +0000 (0:00:00.065) 0:01:19.074 ********* 2026-03-31 03:36:38.393702 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393708 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:36:38.393714 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:36:38.393720 | orchestrator | 2026-03-31 03:36:38.393727 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-31 03:36:38.393733 | orchestrator | Tuesday 31 March 2026 03:36:28 +0000 (0:00:06.329) 0:01:25.404 ********* 2026-03-31 03:36:38.393739 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393745 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:36:38.393751 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:36:38.393757 | orchestrator | 2026-03-31 03:36:38.393763 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-31 03:36:38.393769 | orchestrator | Tuesday 31 March 2026 03:36:32 +0000 (0:00:04.750) 0:01:30.154 ********* 2026-03-31 03:36:38.393776 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:36:38.393782 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:36:38.393788 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:36:38.393794 | orchestrator | 2026-03-31 03:36:38.393800 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:36:38.393808 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:36:38.393815 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-31 03:36:38.393821 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-31 03:36:38.393847 | orchestrator | 2026-03-31 03:36:38.393853 | orchestrator | 2026-03-31 03:36:38.393860 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:36:38.393866 | orchestrator | Tuesday 31 March 2026 03:36:37 +0000 (0:00:05.112) 0:01:35.266 ********* 2026-03-31 03:36:38.393872 | orchestrator | =============================================================================== 2026-03-31 03:36:38.393878 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.54s 2026-03-31 03:36:38.393884 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.53s 2026-03-31 03:36:38.393891 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.81s 2026-03-31 03:36:38.393897 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.33s 2026-03-31 03:36:38.393903 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.15s 2026-03-31 03:36:38.393909 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.11s 2026-03-31 03:36:38.393915 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.75s 2026-03-31 03:36:38.393921 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.95s 2026-03-31 03:36:38.393927 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.73s 2026-03-31 03:36:38.393933 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.66s 2026-03-31 03:36:38.393939 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.62s 2026-03-31 03:36:38.393946 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.34s 
2026-03-31 03:36:38.393963 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.03s 2026-03-31 03:36:38.393969 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.15s 2026-03-31 03:36:38.393976 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.12s 2026-03-31 03:36:38.393995 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.95s 2026-03-31 03:36:38.394003 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.66s 2026-03-31 03:36:38.394010 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.58s 2026-03-31 03:36:38.394070 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.27s 2026-03-31 03:36:38.394078 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.07s 2026-03-31 03:36:41.137862 | orchestrator | 2026-03-31 03:36:41 | INFO  | Task 93226acd-7f28-4020-9d7e-834419bf68a2 (designate) was prepared for execution. 2026-03-31 03:36:41.137942 | orchestrator | 2026-03-31 03:36:41 | INFO  | It takes a moment until task 93226acd-7f28-4020-9d7e-834419bf68a2 (designate) has been started and output is visible here. 
2026-03-31 03:37:11.530301 | orchestrator | 2026-03-31 03:37:11.530407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:37:11.530422 | orchestrator | 2026-03-31 03:37:11.530432 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:37:11.530442 | orchestrator | Tuesday 31 March 2026 03:36:45 +0000 (0:00:00.280) 0:00:00.280 ********* 2026-03-31 03:37:11.530451 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:37:11.530461 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:37:11.530470 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:37:11.530478 | orchestrator | 2026-03-31 03:37:11.530487 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:37:11.530496 | orchestrator | Tuesday 31 March 2026 03:36:45 +0000 (0:00:00.323) 0:00:00.604 ********* 2026-03-31 03:37:11.530505 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-31 03:37:11.530514 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-31 03:37:11.530523 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-31 03:37:11.530553 | orchestrator | 2026-03-31 03:37:11.530563 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-31 03:37:11.530571 | orchestrator | 2026-03-31 03:37:11.530580 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-31 03:37:11.530589 | orchestrator | Tuesday 31 March 2026 03:36:46 +0000 (0:00:00.465) 0:00:01.069 ********* 2026-03-31 03:37:11.530598 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:37:11.530608 | orchestrator | 2026-03-31 03:37:11.530616 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-03-31 03:37:11.530625 | orchestrator | Tuesday 31 March 2026 03:36:46 +0000 (0:00:00.603) 0:00:01.673 *********
2026-03-31 03:37:11.530633 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-31 03:37:11.530642 | orchestrator |
2026-03-31 03:37:11.530651 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-31 03:37:11.530659 | orchestrator | Tuesday 31 March 2026 03:36:50 +0000 (0:00:03.207) 0:00:04.880 *********
2026-03-31 03:37:11.530668 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-31 03:37:11.530677 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-31 03:37:11.530685 | orchestrator |
2026-03-31 03:37:11.530694 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-31 03:37:11.530703 | orchestrator | Tuesday 31 March 2026 03:36:56 +0000 (0:00:05.996) 0:00:10.877 *********
2026-03-31 03:37:11.530711 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:37:11.530720 | orchestrator |
2026-03-31 03:37:11.530729 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-31 03:37:11.530737 | orchestrator | Tuesday 31 March 2026 03:36:59 +0000 (0:00:02.959) 0:00:13.837 *********
2026-03-31 03:37:11.530746 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:37:11.530754 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-31 03:37:11.530763 | orchestrator |
2026-03-31 03:37:11.530772 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-31 03:37:11.530781 | orchestrator | Tuesday 31 March 2026 03:37:02 +0000 (0:00:03.735) 0:00:17.573 *********
2026-03-31 03:37:11.530790 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:37:11.530799 | orchestrator |
2026-03-31 03:37:11.530807 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-31 03:37:11.530816 | orchestrator | Tuesday 31 March 2026 03:37:05 +0000 (0:00:03.051) 0:00:20.624 *********
2026-03-31 03:37:11.530824 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-31 03:37:11.530833 | orchestrator |
2026-03-31 03:37:11.530841 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-31 03:37:11.530851 | orchestrator | Tuesday 31 March 2026 03:37:09 +0000 (0:00:03.519) 0:00:24.144 *********
2026-03-31 03:37:11.530877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:11.530912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:11.530932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:11.530944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:11.530956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:11.530971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:11.530982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:11.531006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.730780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.730911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.730932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.730943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.730969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:17.731061 | orchestrator |
2026-03-31 03:37:17.731071 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-31 03:37:17.731081 | orchestrator | Tuesday 31 March 2026 03:37:12 +0000 (0:00:02.887) 0:00:27.031 *********
2026-03-31 03:37:17.731089 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:37:17.731098 | orchestrator |
2026-03-31 03:37:17.731106 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-31 03:37:17.731115 | orchestrator | Tuesday 31 March 2026 03:37:12 +0000 (0:00:00.140) 0:00:27.172 *********
2026-03-31 03:37:17.731152 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:37:17.731160 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:37:17.731168 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:37:17.731182 | orchestrator |
2026-03-31 03:37:17.731190 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-31 03:37:17.731198 | orchestrator | Tuesday 31 March 2026 03:37:13 +0000 (0:00:00.530) 0:00:27.702 *********
2026-03-31 03:37:17.731211 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:37:17.731219 | orchestrator |
2026-03-31 03:37:17.731227 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-31 03:37:17.731234 | orchestrator | Tuesday 31 March 2026 03:37:13 +0000 (0:00:00.618) 0:00:28.321 *********
2026-03-31 03:37:17.731244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:17.731261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:19.513061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:19.513223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:19.513262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:19.513297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:19.513310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:19.513449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.435979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436209 | orchestrator |
2026-03-31 03:37:20.436223 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-31 03:37:20.436236 | orchestrator | Tuesday 31 March 2026 03:37:19 +0000 (0:00:05.869) 0:00:34.190 *********
2026-03-31 03:37:20.436266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:20.436280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:20.436311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436366 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:37:20.436385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:20.436397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:20.436408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:20.436427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:21.202350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:21.202475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.202503 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:37:21.202534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 03:37:21.202549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 03:37:21.202561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.202573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.202631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 
03:37:21.202644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.202656 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:37:21.202668 | orchestrator | 2026-03-31 03:37:21.202680 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-31 03:37:21.202692 | orchestrator | Tuesday 31 March 2026 03:37:20 +0000 (0:00:01.040) 0:00:35.231 ********* 2026-03-31 03:37:21.202710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 03:37:21.202722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 03:37:21.202733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.202752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.552843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.552941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.552958 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:37:21.552987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 03:37:21.552999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 03:37:21.553011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553092 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:37:21.553107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 03:37:21.553201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 03:37:21.553223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 03:37:21.553264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 03:37:25.902427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:37:25.902555 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:37:25.902576 | orchestrator | 2026-03-31 03:37:25.902590 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-31 
03:37:25.902603 | orchestrator | Tuesday 31 March 2026 03:37:21 +0000 (0:00:00.998) 0:00:36.229 ********* 2026-03-31 03:37:25.902633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 03:37:25.902648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 03:37:25.902682 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 03:37:25.902715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-31 03:37:25.902852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:37.591301 | orchestrator |
2026-03-31 03:37:37.591310 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-31 03:37:37.591318 | orchestrator | Tuesday 31 March 2026 03:37:27 +0000 (0:00:06.209) 0:00:42.439 *********
2026-03-31 03:37:37.591329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:37.591337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:37.591350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:37.591357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:37.591371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:46.380055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:46.380182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:46.380381 | orchestrator |
2026-03-31 03:37:46.380389 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-31 03:37:46.380397 | orchestrator | Tuesday 31 March 2026 03:37:42 +0000 (0:00:14.649) 0:00:57.089 *********
2026-03-31 03:37:46.380410 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-31 03:37:50.789704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-31 03:37:50.789803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-31 03:37:50.789815 | orchestrator |
2026-03-31 03:37:50.789825 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-31 03:37:50.789834 | orchestrator | Tuesday 31 March 2026 03:37:46 +0000 (0:00:03.970) 0:01:01.059 *********
2026-03-31 03:37:50.789857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-31 03:37:50.789865 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-31 03:37:50.789891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-31 03:37:50.789899 | orchestrator |
2026-03-31 03:37:50.789907 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-31 03:37:50.789915 | orchestrator | Tuesday 31 March 2026 03:37:48 +0000 (0:00:02.444) 0:01:03.504 *********
2026-03-31 03:37:50.789926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:50.789939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:50.789948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:50.789971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:50.789986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:50.790002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:50.790012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:50.790069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:50.790078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:50.790086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:50.790159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.647946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:53.648065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:53.648253 | orchestrator |
2026-03-31 03:37:53.648267 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-31 03:37:53.648279 | orchestrator | Tuesday 31 March 2026 03:37:51 +0000 (0:00:02.974) 0:01:06.478 *********
2026-03-31 03:37:53.648297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:53.648320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:53.648342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:53.648365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:53.648415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:54.677439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:54.677588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 03:37:54.677623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 03:37:54.677640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:37:54.677657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:54.677702 | orchestrator |
2026-03-31 03:37:54.677724 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-31 03:37:54.677750 | orchestrator | Tuesday 31 March 2026 03:37:54 +0000 (0:00:02.874) 0:01:09.352 *********
2026-03-31 03:37:55.704639 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:37:55.704759 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:37:55.704780 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:37:55.704796 | orchestrator |
2026-03-31 03:37:55.704808 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-31 03:37:55.704819 | orchestrator | Tuesday 31 March 2026 03:37:55 +0000 (0:00:00.335) 0:01:09.688 *********
2026-03-31 03:37:55.704833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:55.704847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:55.704859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:55.704871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:55.704910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:55.704953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:55.704965 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:37:55.704976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:55.704987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:55.704997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:55.705017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:55.705028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:55.705050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:37:59.081200 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:37:59.081207 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:37:59.081213 | orchestrator |
2026-03-31 03:37:59.081219 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-31 03:37:59.081225 | orchestrator | Tuesday 31 March 2026 03:37:55 +0000 (0:00:00.815) 0:01:10.504 *********
2026-03-31 03:37:59.081244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:59.081251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:59.081263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-31 03:37:59.081268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:59.081279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:37:59.081290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-31 03:38:05.204016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:38:05.204412 | orchestrator |
2026-03-31 03:38:05.204425 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-31 03:38:05.204437 | orchestrator | Tuesday 31 March 2026 03:38:00 +0000 (0:00:04.714) 0:01:15.218 *********
2026-03-31 03:38:05.204449 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:38:05.204461 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:38:05.204471 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:38:05.204482 | orchestrator |
2026-03-31 03:38:05.204493 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-31 03:38:05.204504 | orchestrator | Tuesday 31 March 2026 03:38:00 +0000 (0:00:00.323) 0:01:15.541 *********
2026-03-31 03:38:05.204515 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-31 03:38:05.204525 | orchestrator |
2026-03-31 03:38:05.204536 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-31 03:38:05.204547 | orchestrator | Tuesday 31 March 2026 03:38:02 +0000 (0:00:02.046) 0:01:17.588 *********
2026-03-31 03:38:05.204560 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-31 03:38:05.204573 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-31 03:38:05.204586 | orchestrator |
2026-03-31 03:38:05.204599 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-31 03:38:05.204618 | orchestrator | Tuesday 31 March 2026 03:38:05 +0000 (0:00:02.287) 0:01:19.876 *********
2026-03-31 03:39:25.145681 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:39:25.145845 | orchestrator |
2026-03-31 03:39:25.145916 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-31 03:39:25.145939 | orchestrator | Tuesday 31 March 2026 03:38:20 +0000 (0:00:15.402) 0:01:35.278 *********
2026-03-31 03:39:25.145955 | orchestrator |
2026-03-31 03:39:25.145972 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-31 03:39:25.145988 | orchestrator | Tuesday 31 March 2026 03:38:20 +0000 (0:00:00.071) 0:01:35.349 *********
2026-03-31 03:39:25.146006 | orchestrator |
2026-03-31 03:39:25.146172 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-31 03:39:25.146198 | orchestrator | Tuesday 31 March 2026 03:38:20 +0000 (0:00:00.071) 0:01:35.420 *********
2026-03-31 03:39:25.146218 | orchestrator |
2026-03-31 03:39:25.146237 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-31 03:39:25.146259 | orchestrator | Tuesday 31 March 2026 03:38:20 +0000 (0:00:00.075) 0:01:35.496 *********
2026-03-31 03:39:25.146281 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:39:25.146302 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:39:25.146316 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:39:25.146328 | orchestrator |
2026-03-31 03:39:25.146341 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-31 03:39:25.146354 | orchestrator | Tuesday 31 March 2026 03:38:33 +0000 (0:00:12.864) 0:01:48.360 *********
2026-03-31 03:39:25.146366 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:39:25.146378 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:39:25.146390 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:39:25.146402 | orchestrator |
2026-03-31 03:39:25.146414 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-31 03:39:25.146426 | orchestrator | Tuesday 31 March 2026 03:38:44 +0000 (0:00:10.863) 0:01:59.224 *********
2026-03-31 03:39:25.146444 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:39:25.146462 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:39:25.146478 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:39:25.146495 | orchestrator |
2026-03-31 03:39:25.146510 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-31 03:39:25.146527 | orchestrator | Tuesday 31 March 2026 03:38:55 +0000 (0:00:10.725) 0:02:09.949 *********
2026-03-31 03:39:25.146544 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:39:25.146561 | orchestrator |
changed: [testbed-node-1] 2026-03-31 03:39:25.146579 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:39:25.146597 | orchestrator | 2026-03-31 03:39:25.146615 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-31 03:39:25.146633 | orchestrator | Tuesday 31 March 2026 03:39:00 +0000 (0:00:05.686) 0:02:15.636 ********* 2026-03-31 03:39:25.146651 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:39:25.146668 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:39:25.146686 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:39:25.146705 | orchestrator | 2026-03-31 03:39:25.146723 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-31 03:39:25.146739 | orchestrator | Tuesday 31 March 2026 03:39:06 +0000 (0:00:06.010) 0:02:21.646 ********* 2026-03-31 03:39:25.146750 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:39:25.146761 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:39:25.146771 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:39:25.146782 | orchestrator | 2026-03-31 03:39:25.146793 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-31 03:39:25.146804 | orchestrator | Tuesday 31 March 2026 03:39:18 +0000 (0:00:11.112) 0:02:32.759 ********* 2026-03-31 03:39:25.146815 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:39:25.146825 | orchestrator | 2026-03-31 03:39:25.146836 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:39:25.146848 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:39:25.146876 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-31 03:39:25.146887 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-03-31 03:39:25.146898 | orchestrator | 2026-03-31 03:39:25.146909 | orchestrator | 2026-03-31 03:39:25.146921 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:39:25.146939 | orchestrator | Tuesday 31 March 2026 03:39:24 +0000 (0:00:06.615) 0:02:39.375 ********* 2026-03-31 03:39:25.146987 | orchestrator | =============================================================================== 2026-03-31 03:39:25.147007 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.40s 2026-03-31 03:39:25.147025 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.65s 2026-03-31 03:39:25.147071 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.86s 2026-03-31 03:39:25.147092 | orchestrator | designate : Restart designate-worker container ------------------------- 11.11s 2026-03-31 03:39:25.147110 | orchestrator | designate : Restart designate-api container ---------------------------- 10.86s 2026-03-31 03:39:25.147127 | orchestrator | designate : Restart designate-central container ------------------------ 10.73s 2026-03-31 03:39:25.147145 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.62s 2026-03-31 03:39:25.147165 | orchestrator | designate : Copying over config.json files for services ----------------- 6.21s 2026-03-31 03:39:25.147184 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.01s 2026-03-31 03:39:25.147201 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.00s 2026-03-31 03:39:25.147218 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.87s 2026-03-31 03:39:25.147252 | orchestrator | designate : Restart designate-producer container ------------------------ 5.69s 2026-03-31 03:39:25.147263 | 
orchestrator | designate : Check designate containers ---------------------------------- 4.71s 2026-03-31 03:39:25.147274 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.97s 2026-03-31 03:39:25.147285 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.74s 2026-03-31 03:39:25.147295 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.52s 2026-03-31 03:39:25.147306 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.21s 2026-03-31 03:39:25.147317 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.05s 2026-03-31 03:39:25.147327 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.97s 2026-03-31 03:39:25.147338 | orchestrator | service-ks-register : designate | Creating projects --------------------- 2.96s 2026-03-31 03:39:27.705815 | orchestrator | 2026-03-31 03:39:27 | INFO  | Task c17443c4-67fe-45a3-a197-9475bd82f86d (octavia) was prepared for execution. 2026-03-31 03:39:27.705914 | orchestrator | 2026-03-31 03:39:27 | INFO  | It takes a moment until task c17443c4-67fe-45a3-a197-9475bd82f86d (octavia) has been started and output is visible here. 
2026-03-31 03:41:27.754287 | orchestrator | 2026-03-31 03:41:27.754424 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:41:27.754451 | orchestrator | 2026-03-31 03:41:27.754468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:41:27.754485 | orchestrator | Tuesday 31 March 2026 03:39:32 +0000 (0:00:00.277) 0:00:00.277 ********* 2026-03-31 03:41:27.754501 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:27.754519 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:41:27.754536 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:41:27.754552 | orchestrator | 2026-03-31 03:41:27.754567 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:41:27.754584 | orchestrator | Tuesday 31 March 2026 03:39:32 +0000 (0:00:00.323) 0:00:00.600 ********* 2026-03-31 03:41:27.754631 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-31 03:41:27.754649 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-31 03:41:27.754665 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-31 03:41:27.754680 | orchestrator | 2026-03-31 03:41:27.754696 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-31 03:41:27.754711 | orchestrator | 2026-03-31 03:41:27.754726 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-31 03:41:27.754741 | orchestrator | Tuesday 31 March 2026 03:39:33 +0000 (0:00:00.460) 0:00:01.060 ********* 2026-03-31 03:41:27.754756 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:41:27.754773 | orchestrator | 2026-03-31 03:41:27.754788 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-31 03:41:27.754803 | orchestrator | Tuesday 31 March 2026 03:39:33 +0000 (0:00:00.584) 0:00:01.645 ********* 2026-03-31 03:41:27.754819 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-31 03:41:27.754836 | orchestrator | 2026-03-31 03:41:27.754854 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-31 03:41:27.754871 | orchestrator | Tuesday 31 March 2026 03:39:36 +0000 (0:00:03.248) 0:00:04.893 ********* 2026-03-31 03:41:27.754887 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-31 03:41:27.754903 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-31 03:41:27.754921 | orchestrator | 2026-03-31 03:41:27.754937 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-31 03:41:27.754954 | orchestrator | Tuesday 31 March 2026 03:39:42 +0000 (0:00:06.084) 0:00:10.978 ********* 2026-03-31 03:41:27.754970 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 03:41:27.755020 | orchestrator | 2026-03-31 03:41:27.755038 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-31 03:41:27.755054 | orchestrator | Tuesday 31 March 2026 03:39:46 +0000 (0:00:03.092) 0:00:14.070 ********* 2026-03-31 03:41:27.755070 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 03:41:27.755107 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-31 03:41:27.755125 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-31 03:41:27.755143 | orchestrator | 2026-03-31 03:41:27.755159 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-31 03:41:27.755175 | orchestrator | Tuesday 31 March 2026 03:39:53 +0000 
(0:00:07.921) 0:00:21.992 ********* 2026-03-31 03:41:27.755194 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:41:27.755210 | orchestrator | 2026-03-31 03:41:27.755227 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-31 03:41:27.755243 | orchestrator | Tuesday 31 March 2026 03:39:56 +0000 (0:00:03.046) 0:00:25.038 ********* 2026-03-31 03:41:27.755259 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-31 03:41:27.755275 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-31 03:41:27.755292 | orchestrator | 2026-03-31 03:41:27.755308 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-31 03:41:27.755324 | orchestrator | Tuesday 31 March 2026 03:40:03 +0000 (0:00:06.736) 0:00:31.775 ********* 2026-03-31 03:41:27.755340 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-31 03:41:27.755357 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-31 03:41:27.755373 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-31 03:41:27.755390 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-31 03:41:27.755407 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-31 03:41:27.755438 | orchestrator | 2026-03-31 03:41:27.755456 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-31 03:41:27.755472 | orchestrator | Tuesday 31 March 2026 03:40:18 +0000 (0:00:14.620) 0:00:46.395 ********* 2026-03-31 03:41:27.755489 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:41:27.755506 | orchestrator | 2026-03-31 03:41:27.755522 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-31 03:41:27.755539 | orchestrator | Tuesday 31 March 2026 03:40:19 +0000 (0:00:00.811) 0:00:47.206 ********* 2026-03-31 03:41:27.755555 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.755573 | orchestrator | 2026-03-31 03:41:27.755590 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-31 03:41:27.755606 | orchestrator | Tuesday 31 March 2026 03:40:23 +0000 (0:00:04.751) 0:00:51.958 ********* 2026-03-31 03:41:27.755621 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.755637 | orchestrator | 2026-03-31 03:41:27.755654 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-31 03:41:27.755697 | orchestrator | Tuesday 31 March 2026 03:40:28 +0000 (0:00:04.551) 0:00:56.509 ********* 2026-03-31 03:41:27.755713 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:27.755724 | orchestrator | 2026-03-31 03:41:27.755733 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-31 03:41:27.755743 | orchestrator | Tuesday 31 March 2026 03:40:31 +0000 (0:00:02.981) 0:00:59.491 ********* 2026-03-31 03:41:27.755752 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-31 03:41:27.755762 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-31 03:41:27.755771 | orchestrator | 2026-03-31 03:41:27.755780 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-31 03:41:27.755790 | orchestrator | Tuesday 31 March 2026 03:40:41 +0000 (0:00:09.905) 0:01:09.396 ********* 2026-03-31 03:41:27.755800 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-31 03:41:27.755809 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-31 03:41:27.755821 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-31 03:41:27.755836 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-31 03:41:27.755846 | orchestrator | 2026-03-31 03:41:27.755856 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-31 03:41:27.755865 | orchestrator | Tuesday 31 March 2026 03:40:55 +0000 (0:00:14.608) 0:01:24.004 ********* 2026-03-31 03:41:27.755875 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.755896 | orchestrator | 2026-03-31 03:41:27.755906 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-31 03:41:27.755916 | orchestrator | Tuesday 31 March 2026 03:41:00 +0000 (0:00:04.339) 0:01:28.344 ********* 2026-03-31 03:41:27.755925 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.755935 | orchestrator | 2026-03-31 03:41:27.755944 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-31 03:41:27.755954 | orchestrator | Tuesday 31 March 2026 03:41:05 +0000 (0:00:04.834) 0:01:33.178 ********* 2026-03-31 03:41:27.755963 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:41:27.755973 | orchestrator | 2026-03-31 03:41:27.756010 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-31 03:41:27.756027 | orchestrator | Tuesday 31 March 2026 03:41:05 +0000 (0:00:00.246) 0:01:33.425 ********* 2026-03-31 03:41:27.756044 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:27.756060 | orchestrator | 2026-03-31 03:41:27.756075 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-31 03:41:27.756105 | orchestrator | Tuesday 31 March 2026 03:41:09 +0000 (0:00:04.558) 0:01:37.983 ********* 2026-03-31 03:41:27.756130 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:41:27.756144 | orchestrator | 2026-03-31 03:41:27.756154 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-31 03:41:27.756164 | orchestrator | Tuesday 31 March 2026 03:41:11 +0000 (0:00:01.195) 0:01:39.178 ********* 2026-03-31 03:41:27.756174 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756183 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756193 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756202 | orchestrator | 2026-03-31 03:41:27.756212 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-31 03:41:27.756222 | orchestrator | Tuesday 31 March 2026 03:41:16 +0000 (0:00:05.148) 0:01:44.327 ********* 2026-03-31 03:41:27.756231 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756241 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756251 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756260 | orchestrator | 2026-03-31 03:41:27.756270 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-31 03:41:27.756280 | orchestrator | Tuesday 31 March 2026 03:41:20 +0000 (0:00:03.898) 0:01:48.225 ********* 2026-03-31 03:41:27.756290 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756299 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756309 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756319 | orchestrator | 2026-03-31 03:41:27.756328 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-31 
03:41:27.756338 | orchestrator | Tuesday 31 March 2026 03:41:21 +0000 (0:00:01.069) 0:01:49.295 ********* 2026-03-31 03:41:27.756348 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:27.756357 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:41:27.756367 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:41:27.756377 | orchestrator | 2026-03-31 03:41:27.756386 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-31 03:41:27.756396 | orchestrator | Tuesday 31 March 2026 03:41:23 +0000 (0:00:01.796) 0:01:51.092 ********* 2026-03-31 03:41:27.756406 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756415 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756425 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756435 | orchestrator | 2026-03-31 03:41:27.756444 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-31 03:41:27.756454 | orchestrator | Tuesday 31 March 2026 03:41:24 +0000 (0:00:01.311) 0:01:52.403 ********* 2026-03-31 03:41:27.756464 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756473 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756483 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756492 | orchestrator | 2026-03-31 03:41:27.756502 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-31 03:41:27.756512 | orchestrator | Tuesday 31 March 2026 03:41:25 +0000 (0:00:01.194) 0:01:53.597 ********* 2026-03-31 03:41:27.756522 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:27.756531 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:27.756541 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:27.756550 | orchestrator | 2026-03-31 03:41:27.756568 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-31 03:41:52.409244 | orchestrator 
| Tuesday 31 March 2026 03:41:27 +0000 (0:00:02.174) 0:01:55.772 ********* 2026-03-31 03:41:52.409362 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:41:52.409379 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:41:52.409390 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:41:52.409401 | orchestrator | 2026-03-31 03:41:52.409413 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-31 03:41:52.409425 | orchestrator | Tuesday 31 March 2026 03:41:29 +0000 (0:00:01.507) 0:01:57.279 ********* 2026-03-31 03:41:52.409460 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409472 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:41:52.409483 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:41:52.409494 | orchestrator | 2026-03-31 03:41:52.409505 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-31 03:41:52.409516 | orchestrator | Tuesday 31 March 2026 03:41:29 +0000 (0:00:00.630) 0:01:57.910 ********* 2026-03-31 03:41:52.409526 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:41:52.409537 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409547 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:41:52.409558 | orchestrator | 2026-03-31 03:41:52.409569 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-31 03:41:52.409579 | orchestrator | Tuesday 31 March 2026 03:41:33 +0000 (0:00:03.223) 0:02:01.133 ********* 2026-03-31 03:41:52.409591 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:41:52.409602 | orchestrator | 2026-03-31 03:41:52.409612 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-31 03:41:52.409623 | orchestrator | Tuesday 31 March 2026 03:41:33 +0000 (0:00:00.561) 0:02:01.695 ********* 2026-03-31 
03:41:52.409634 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409644 | orchestrator | 2026-03-31 03:41:52.409655 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-31 03:41:52.409666 | orchestrator | Tuesday 31 March 2026 03:41:37 +0000 (0:00:03.453) 0:02:05.148 ********* 2026-03-31 03:41:52.409676 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409687 | orchestrator | 2026-03-31 03:41:52.409697 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-31 03:41:52.409708 | orchestrator | Tuesday 31 March 2026 03:41:40 +0000 (0:00:02.992) 0:02:08.141 ********* 2026-03-31 03:41:52.409720 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-31 03:41:52.409731 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-31 03:41:52.409742 | orchestrator | 2026-03-31 03:41:52.409752 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-31 03:41:52.409763 | orchestrator | Tuesday 31 March 2026 03:41:46 +0000 (0:00:06.340) 0:02:14.481 ********* 2026-03-31 03:41:52.409776 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409788 | orchestrator | 2026-03-31 03:41:52.409800 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-31 03:41:52.409812 | orchestrator | Tuesday 31 March 2026 03:41:49 +0000 (0:00:03.320) 0:02:17.801 ********* 2026-03-31 03:41:52.409824 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:41:52.409851 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:41:52.409864 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:41:52.409877 | orchestrator | 2026-03-31 03:41:52.409890 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-31 03:41:52.409902 | orchestrator | Tuesday 31 March 2026 03:41:50 +0000 (0:00:00.578) 0:02:18.380 ********* 
2026-03-31 03:41:52.409918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:41:52.409954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:41:52.410002 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:41:52.410087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:41:52.410115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:41:52.410135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:41:52.410148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:52.410168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:52.410189 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920343 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:41:53.920439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:41:53.920458 | orchestrator |
2026-03-31 03:41:53.920477 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-31 03:41:53.920496 | orchestrator | Tuesday 31 March 2026 03:41:52 +0000 (0:00:02.509) 0:02:20.889 *********
2026-03-31 03:41:53.920515 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:41:53.920536 | orchestrator |
2026-03-31 03:41:53.920555 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-31 03:41:53.920573 | orchestrator | Tuesday 31 March 2026 03:41:52 +0000 (0:00:00.136) 0:02:21.026 *********
2026-03-31 03:41:53.920590 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:41:53.920633 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:41:53.920654 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:41:53.920673 | orchestrator |
2026-03-31 03:41:53.920692 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-31 03:41:53.920711 | orchestrator | Tuesday 31 March 2026 03:41:53 +0000 (0:00:00.338) 0:02:21.364 *********
2026-03-31 03:41:53.920732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:41:53.920763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:41:53.920784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:41:53.920819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:41:53.920838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:41:53.920871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:41:58.930731 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:41:58.930883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:41:58.930919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:41:58.930963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:41:58.931054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:41:58.931077 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:41:58.931132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:41:58.931146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:41:58.931181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:41:58.931194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:41:58.931214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:41:58.931238 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:41:58.931251 | orchestrator |
2026-03-31 03:41:58.931265 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-31 03:41:58.931280 | orchestrator | Tuesday 31 March 2026 03:41:54 +0000 (0:00:00.694) 0:02:22.059 *********
2026-03-31 03:41:58.931292 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:41:58.931305 | orchestrator |
2026-03-31 03:41:58.931317 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-31 03:41:58.931330 | orchestrator | Tuesday 31 March 2026 03:41:54 +0000 (0:00:00.816) 0:02:22.876 *********
2026-03-31 03:41:58.931344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:41:58.931358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:41:58.931383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:00.453668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:00.453774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:00.453790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:00.453804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:00.453941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:42:00.454105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:42:00.454132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-31 03:42:00.454152 | orchestrator |
2026-03-31 03:42:00.454173 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-31 03:42:00.454196 | orchestrator | Tuesday 31 March 2026 03:41:59 +0000 (0:00:05.016) 0:02:27.893 *********
2026-03-31 03:42:00.454236 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:42:00.552242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:00.552341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:00.552355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:42:00.552368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:00.552380 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:42:00.552393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:42:00.552427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:00.552461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:00.552473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:42:00.552484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:00.552494 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:42:00.552504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:42:00.552514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:00.552531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:00.552553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-31 03:42:01.383350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:01.383429 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:42:01.383439 | orchestrator | 2026-03-31 03:42:01.383446 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-31 03:42:01.383453 | orchestrator | Tuesday 31 March 2026 03:42:00 +0000 (0:00:00.693) 0:02:28.586 ********* 2026-03-31 03:42:01.383460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-31 03:42:01.383468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:01.383479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:01.383519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:42:01.383558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:01.383570 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:42:01.383576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:42:01.383582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:01.383588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:01.383594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:42:01.383605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:01.383611 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:42:01.383625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 03:42:05.988327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 03:42:05.988464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 03:42:05.988492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 03:42:05.988536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 03:42:05.988550 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:42:05.988564 | orchestrator | 2026-03-31 03:42:05.988576 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-31 
03:42:05.988589 | orchestrator | Tuesday 31 March 2026 03:42:01 +0000 (0:00:01.349) 0:02:29.936 ********* 2026-03-31 03:42:05.988602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:05.988664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:05.988687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:05.988704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:05.988739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:05.988759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:05.988779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:05.988817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-31 03:42:23.591686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:42:23.591696 | orchestrator | 2026-03-31 03:42:23.591703 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-31 03:42:23.591711 | orchestrator | Tuesday 31 March 2026 03:42:06 +0000 (0:00:05.076) 0:02:35.012 ********* 2026-03-31 03:42:23.591720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-31 03:42:23.591732 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-31 03:42:23.591743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-31 03:42:23.591752 | orchestrator | 2026-03-31 03:42:23.591763 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-31 03:42:23.591773 | orchestrator | Tuesday 31 March 2026 03:42:08 +0000 (0:00:01.758) 0:02:36.770 ********* 2026-03-31 03:42:23.591793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:23.591808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:23.591820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:23.591835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:39.434512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:39.434626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:39.434677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:42:39.434832 | orchestrator | 2026-03-31 03:42:39.434846 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-31 03:42:39.434858 | orchestrator | Tuesday 31 March 2026 03:42:27 +0000 (0:00:18.362) 0:02:55.133 ********* 2026-03-31 03:42:39.434869 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:42:39.434881 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:42:39.434892 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:42:39.434903 | orchestrator | 2026-03-31 03:42:39.434914 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-31 03:42:39.434925 | orchestrator | Tuesday 31 March 2026 03:42:28 +0000 (0:00:01.823) 0:02:56.957 ********* 2026-03-31 03:42:39.434936 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-31 03:42:39.434946 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-31 03:42:39.435031 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-31 03:42:39.435044 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-31 03:42:39.435057 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-31 03:42:39.435070 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-31 03:42:39.435082 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-31 03:42:39.435101 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-31 03:42:39.435114 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-31 03:42:39.435127 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-31 03:42:39.435140 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-31 03:42:39.435152 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-31 03:42:39.435172 | orchestrator | 2026-03-31 03:42:39.435185 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-31 03:42:39.435198 | orchestrator | Tuesday 31 March 2026 03:42:34 +0000 (0:00:05.168) 0:03:02.126 ********* 2026-03-31 03:42:39.435210 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-31 03:42:39.435223 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-31 03:42:39.435244 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-31 03:42:48.198662 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.198773 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.198788 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.198799 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.198811 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.198822 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.198833 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-31 03:42:48.198844 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-31 03:42:48.198855 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-31 03:42:48.198866 | orchestrator | 2026-03-31 03:42:48.198878 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-31 03:42:48.198890 | orchestrator | Tuesday 31 March 2026 03:42:39 +0000 (0:00:05.336) 0:03:07.463 ********* 2026-03-31 03:42:48.198901 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem) 2026-03-31 03:42:48.198912 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-31 03:42:48.198923 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-31 03:42:48.198934 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.199015 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.199029 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-31 03:42:48.199040 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.199051 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.199062 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-31 03:42:48.199073 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-31 03:42:48.199083 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-31 03:42:48.199094 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-31 03:42:48.199105 | orchestrator | 2026-03-31 03:42:48.199116 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-31 03:42:48.199128 | orchestrator | Tuesday 31 March 2026 03:42:44 +0000 (0:00:05.373) 0:03:12.836 ********* 2026-03-31 03:42:48.199143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:48.199178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:48.199247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 03:42:48.199268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:48.199284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-31 03:42:48.199296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-31 03:42:48.199311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:48.199343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:48.199362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-31 03:42:48.199384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:15.247315 | orchestrator | 2026-03-31 
03:44:15.247320 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-31 03:44:15.247325 | orchestrator | Tuesday 31 March 2026 03:42:49 +0000 (0:00:04.311) 0:03:17.148 ********* 2026-03-31 03:44:15.247329 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:15.247334 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:15.247338 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:15.247341 | orchestrator | 2026-03-31 03:44:15.247345 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-31 03:44:15.247349 | orchestrator | Tuesday 31 March 2026 03:42:49 +0000 (0:00:00.307) 0:03:17.455 ********* 2026-03-31 03:44:15.247353 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247356 | orchestrator | 2026-03-31 03:44:15.247360 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-31 03:44:15.247364 | orchestrator | Tuesday 31 March 2026 03:42:51 +0000 (0:00:01.962) 0:03:19.418 ********* 2026-03-31 03:44:15.247367 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247371 | orchestrator | 2026-03-31 03:44:15.247375 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-31 03:44:15.247379 | orchestrator | Tuesday 31 March 2026 03:42:53 +0000 (0:00:02.007) 0:03:21.425 ********* 2026-03-31 03:44:15.247382 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247386 | orchestrator | 2026-03-31 03:44:15.247390 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-31 03:44:15.247394 | orchestrator | Tuesday 31 March 2026 03:42:55 +0000 (0:00:02.104) 0:03:23.530 ********* 2026-03-31 03:44:15.247407 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247412 | orchestrator | 2026-03-31 03:44:15.247415 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-31 03:44:15.247419 | orchestrator | Tuesday 31 March 2026 03:42:57 +0000 (0:00:02.088) 0:03:25.619 ********* 2026-03-31 03:44:15.247423 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247426 | orchestrator | 2026-03-31 03:44:15.247430 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-31 03:44:15.247434 | orchestrator | Tuesday 31 March 2026 03:43:18 +0000 (0:00:21.087) 0:03:46.706 ********* 2026-03-31 03:44:15.247438 | orchestrator | 2026-03-31 03:44:15.247441 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-31 03:44:15.247445 | orchestrator | Tuesday 31 March 2026 03:43:18 +0000 (0:00:00.075) 0:03:46.782 ********* 2026-03-31 03:44:15.247449 | orchestrator | 2026-03-31 03:44:15.247452 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-31 03:44:15.247456 | orchestrator | Tuesday 31 March 2026 03:43:18 +0000 (0:00:00.072) 0:03:46.855 ********* 2026-03-31 03:44:15.247460 | orchestrator | 2026-03-31 03:44:15.247463 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-31 03:44:15.247471 | orchestrator | Tuesday 31 March 2026 03:43:18 +0000 (0:00:00.075) 0:03:46.930 ********* 2026-03-31 03:44:15.247475 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247478 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:44:15.247482 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:44:15.247486 | orchestrator | 2026-03-31 03:44:15.247489 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-31 03:44:15.247493 | orchestrator | Tuesday 31 March 2026 03:43:34 +0000 (0:00:15.782) 0:04:02.713 ********* 2026-03-31 03:44:15.247497 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247500 | orchestrator | changed: 
[testbed-node-2] 2026-03-31 03:44:15.247504 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:44:15.247508 | orchestrator | 2026-03-31 03:44:15.247512 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-31 03:44:15.247515 | orchestrator | Tuesday 31 March 2026 03:43:45 +0000 (0:00:10.946) 0:04:13.659 ********* 2026-03-31 03:44:15.247519 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:44:15.247523 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:44:15.247526 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247530 | orchestrator | 2026-03-31 03:44:15.247534 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-31 03:44:15.247538 | orchestrator | Tuesday 31 March 2026 03:43:53 +0000 (0:00:08.313) 0:04:21.973 ********* 2026-03-31 03:44:15.247541 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247545 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:44:15.247549 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:44:15.247552 | orchestrator | 2026-03-31 03:44:15.247556 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-31 03:44:15.247560 | orchestrator | Tuesday 31 March 2026 03:44:03 +0000 (0:00:10.065) 0:04:32.039 ********* 2026-03-31 03:44:15.247563 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:44:15.247567 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:44:15.247571 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:44:15.247575 | orchestrator | 2026-03-31 03:44:15.247578 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:44:15.247583 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:44:15.247589 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-31 03:44:15.247593 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 03:44:15.247596 | orchestrator | 2026-03-31 03:44:15.247600 | orchestrator | 2026-03-31 03:44:15.247604 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:44:15.247608 | orchestrator | Tuesday 31 March 2026 03:44:15 +0000 (0:00:11.212) 0:04:43.251 ********* 2026-03-31 03:44:15.247612 | orchestrator | =============================================================================== 2026-03-31 03:44:15.247618 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.09s 2026-03-31 03:44:15.247622 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.36s 2026-03-31 03:44:15.247626 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.78s 2026-03-31 03:44:15.247630 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.62s 2026-03-31 03:44:15.247633 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.61s 2026-03-31 03:44:15.247637 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.21s 2026-03-31 03:44:15.247641 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.95s 2026-03-31 03:44:15.247644 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.07s 2026-03-31 03:44:15.247651 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.91s 2026-03-31 03:44:15.247655 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.31s 2026-03-31 03:44:15.247659 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.92s 2026-03-31 03:44:15.247662 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.74s 2026-03-31 03:44:15.247666 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.34s 2026-03-31 03:44:15.247670 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.08s 2026-03-31 03:44:15.247676 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.37s 2026-03-31 03:44:15.713628 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.34s 2026-03-31 03:44:15.713750 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.17s 2026-03-31 03:44:15.713774 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.15s 2026-03-31 03:44:15.713789 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.08s 2026-03-31 03:44:15.713799 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.02s 2026-03-31 03:44:18.505667 | orchestrator | 2026-03-31 03:44:18 | INFO  | Task ecfbc9f9-7c3d-40b7-bc25-9b61e43be69f (ceilometer) was prepared for execution. 2026-03-31 03:44:18.505779 | orchestrator | 2026-03-31 03:44:18 | INFO  | It takes a moment until task ecfbc9f9-7c3d-40b7-bc25-9b61e43be69f (ceilometer) has been started and output is visible here. 
2026-03-31 03:44:42.028814 | orchestrator | 2026-03-31 03:44:42.028947 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:44:42.028959 | orchestrator | 2026-03-31 03:44:42.028967 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:44:42.028974 | orchestrator | Tuesday 31 March 2026 03:44:22 +0000 (0:00:00.268) 0:00:00.268 ********* 2026-03-31 03:44:42.028980 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:44:42.028988 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:44:42.028994 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:44:42.029000 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:44:42.029006 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:44:42.029012 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:44:42.029017 | orchestrator | 2026-03-31 03:44:42.029023 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:44:42.029029 | orchestrator | Tuesday 31 March 2026 03:44:23 +0000 (0:00:00.754) 0:00:01.023 ********* 2026-03-31 03:44:42.029036 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029042 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029048 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029054 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029060 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029065 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-31 03:44:42.029071 | orchestrator | 2026-03-31 03:44:42.029077 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-31 03:44:42.029083 | orchestrator | 2026-03-31 03:44:42.029089 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-31 03:44:42.029095 | orchestrator | Tuesday 31 March 2026 03:44:24 +0000 (0:00:00.728) 0:00:01.751 ********* 2026-03-31 03:44:42.029102 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:44:42.029109 | orchestrator | 2026-03-31 03:44:42.029115 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-31 03:44:42.029121 | orchestrator | Tuesday 31 March 2026 03:44:25 +0000 (0:00:01.350) 0:00:03.102 ********* 2026-03-31 03:44:42.029146 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:42.029152 | orchestrator | 2026-03-31 03:44:42.029158 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-31 03:44:42.029164 | orchestrator | Tuesday 31 March 2026 03:44:25 +0000 (0:00:00.140) 0:00:03.242 ********* 2026-03-31 03:44:42.029169 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:42.029175 | orchestrator | 2026-03-31 03:44:42.029181 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-31 03:44:42.029187 | orchestrator | Tuesday 31 March 2026 03:44:25 +0000 (0:00:00.140) 0:00:03.382 ********* 2026-03-31 03:44:42.029192 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 03:44:42.029198 | orchestrator | 2026-03-31 03:44:42.029204 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-31 03:44:42.029210 | orchestrator | Tuesday 31 March 2026 03:44:29 +0000 (0:00:03.526) 0:00:06.909 ********* 2026-03-31 03:44:42.029229 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 03:44:42.029234 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-31 03:44:42.029240 | orchestrator | 
2026-03-31 03:44:42.029246 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-31 03:44:42.029252 | orchestrator | Tuesday 31 March 2026 03:44:33 +0000 (0:00:03.770) 0:00:10.680 ********* 2026-03-31 03:44:42.029258 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:44:42.029263 | orchestrator | 2026-03-31 03:44:42.029269 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-31 03:44:42.029275 | orchestrator | Tuesday 31 March 2026 03:44:36 +0000 (0:00:03.065) 0:00:13.746 ********* 2026-03-31 03:44:42.029281 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-31 03:44:42.029286 | orchestrator | 2026-03-31 03:44:42.029292 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-31 03:44:42.029298 | orchestrator | Tuesday 31 March 2026 03:44:40 +0000 (0:00:03.842) 0:00:17.588 ********* 2026-03-31 03:44:42.029304 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:42.029310 | orchestrator | 2026-03-31 03:44:42.029316 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-31 03:44:42.029322 | orchestrator | Tuesday 31 March 2026 03:44:40 +0000 (0:00:00.154) 0:00:17.742 ********* 2026-03-31 03:44:42.029330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:42.029352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:42.029359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:42.029372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:42.029384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:44:42.029390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:42.029396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:44:42.029408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:44:47.405061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:44:47.405191 | orchestrator | 2026-03-31 03:44:47.405207 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-31 03:44:47.405218 | orchestrator | Tuesday 31 March 2026 03:44:42 +0000 (0:00:01.694) 0:00:19.437 ********* 2026-03-31 03:44:47.405226 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-03-31 03:44:47.405235 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:44:47.405242 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:44:47.405250 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 03:44:47.405258 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:44:47.405267 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 03:44:47.405274 | orchestrator | 2026-03-31 03:44:47.405282 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-31 03:44:47.405291 | orchestrator | Tuesday 31 March 2026 03:44:43 +0000 (0:00:01.895) 0:00:21.333 ********* 2026-03-31 03:44:47.405299 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:44:47.405309 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:44:47.405317 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:44:47.405338 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:44:47.405348 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:44:47.405356 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:44:47.405365 | orchestrator | 2026-03-31 03:44:47.405373 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-31 03:44:47.405382 | orchestrator | Tuesday 31 March 2026 03:44:44 +0000 (0:00:00.635) 0:00:21.968 ********* 2026-03-31 03:44:47.405391 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:47.405400 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:47.405409 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:47.405417 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:44:47.405426 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:44:47.405434 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:44:47.405442 | orchestrator | 2026-03-31 03:44:47.405450 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-03-31 03:44:47.405460 | orchestrator | Tuesday 31 March 2026 03:44:45 +0000 (0:00:00.913) 0:00:22.881 ********* 2026-03-31 03:44:47.405468 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:44:47.405475 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:44:47.405482 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:44:47.405490 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:44:47.405497 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:44:47.405505 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:44:47.405512 | orchestrator | 2026-03-31 03:44:47.405564 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-31 03:44:47.405574 | orchestrator | Tuesday 31 March 2026 03:44:46 +0000 (0:00:00.662) 0:00:23.544 ********* 2026-03-31 03:44:47.405586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:47.405600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:44:47.405621 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:47.405653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:47.405665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:44:47.405674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:47.405688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:44:47.405698 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:47.405708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:47.405718 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:47.405733 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:44:47.405744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:47.405753 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:44:47.405770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265064 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:44:52.265210 | orchestrator | 2026-03-31 03:44:52.265236 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-31 03:44:52.265257 | orchestrator | Tuesday 31 March 2026 03:44:47 +0000 (0:00:01.281) 0:00:24.825 ********* 2026-03-31 03:44:52.265290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:44:52.265331 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:52.265358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265368 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:44:52.265402 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:52.265412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 
03:44:52.265433 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:52.265461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265473 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:44:52.265483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265492 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:44:52.265507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:44:52.265534 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:44:52.265545 | orchestrator | 2026-03-31 03:44:52.265558 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-31 03:44:52.265571 | orchestrator | Tuesday 31 March 2026 03:44:48 +0000 (0:00:00.871) 0:00:25.697 ********* 2026-03-31 03:44:52.265582 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:44:52.265593 | orchestrator | 2026-03-31 03:44:52.265604 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-31 03:44:52.265616 | orchestrator | Tuesday 31 March 2026 03:44:49 +0000 (0:00:00.769) 0:00:26.466 ********* 2026-03-31 03:44:52.265627 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:44:52.265639 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:44:52.265650 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:44:52.265661 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:44:52.265672 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:44:52.265682 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:44:52.265693 | orchestrator | 2026-03-31 03:44:52.265704 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-31 03:44:52.265716 | orchestrator | Tuesday 31 March 2026 03:44:49 +0000 (0:00:00.823) 
0:00:27.289 ********* 2026-03-31 03:44:52.265727 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:44:52.265737 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:44:52.265748 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:44:52.265759 | orchestrator | ok: [testbed-node-3] 2026-03-31 03:44:52.265770 | orchestrator | ok: [testbed-node-4] 2026-03-31 03:44:52.265781 | orchestrator | ok: [testbed-node-5] 2026-03-31 03:44:52.265791 | orchestrator | 2026-03-31 03:44:52.265802 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-31 03:44:52.265813 | orchestrator | Tuesday 31 March 2026 03:44:50 +0000 (0:00:00.924) 0:00:28.213 ********* 2026-03-31 03:44:52.265824 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:52.265835 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:52.265846 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:52.265856 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:44:52.265867 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:44:52.265879 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:44:52.265889 | orchestrator | 2026-03-31 03:44:52.265944 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-31 03:44:52.265956 | orchestrator | Tuesday 31 March 2026 03:44:51 +0000 (0:00:00.875) 0:00:29.089 ********* 2026-03-31 03:44:52.265965 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:44:52.265975 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:44:52.265984 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:44:52.265994 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:44:52.266003 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:44:52.266013 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:44:52.266059 | orchestrator | 2026-03-31 03:44:57.719216 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-31 03:44:57.719339 | orchestrator | Tuesday 31 March 2026 03:44:52 +0000 (0:00:00.602) 0:00:29.692 ********* 2026-03-31 03:44:57.719359 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:44:57.719376 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 03:44:57.719389 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 03:44:57.719404 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:44:57.719417 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:44:57.719429 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 03:44:57.719438 | orchestrator | 2026-03-31 03:44:57.719469 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-31 03:44:57.719478 | orchestrator | Tuesday 31 March 2026 03:44:53 +0000 (0:00:01.534) 0:00:31.226 ********* 2026-03-31 03:44:57.719489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:44:57.719515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:44:57.719525 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:44:57.719534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:44:57.719543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:44:57.719551 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:44:57.719560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:44:57.719609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:44:57.719626 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:44:57.719635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:44:57.719644 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:44:57.719656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:44:57.719665 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:44:57.719673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:44:57.719681 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:44:57.719689 | orchestrator |
2026-03-31 03:44:57.719698 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-03-31 03:44:57.719706 | orchestrator | Tuesday 31 March 2026 03:44:54 +0000 (0:00:00.882) 0:00:32.109 *********
2026-03-31 03:44:57.719714 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:44:57.719722 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:44:57.719730 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:44:57.719737 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:44:57.719746 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:44:57.719756 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:44:57.719765 | orchestrator |
2026-03-31 03:44:57.719773 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-03-31 03:44:57.719782 | orchestrator | Tuesday 31 March 2026 03:44:55 +0000 (0:00:00.899) 0:00:33.009 *********
2026-03-31 03:44:57.719791 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 03:44:57.719800 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-31 03:44:57.719809 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-31 03:44:57.719818 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-31 03:44:57.719827 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-31 03:44:57.719836 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-31 03:44:57.719851 | orchestrator |
2026-03-31 03:44:57.719860 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-03-31 03:44:57.719870 | orchestrator | Tuesday 31 March 2026 03:44:57 +0000 (0:00:01.513) 0:00:34.522 *********
2026-03-31 03:44:57.719887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:03.676398 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:03.676428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/',
2026-03-31 03:45:03.676504 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:45:03.676523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:03.676593 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:45:03.676614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676633 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:45:03.676680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676701 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:45:03.676721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.676733 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:45:03.676747 | orchestrator |
2026-03-31 03:45:03.676760 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-03-31 03:45:03.676773 | orchestrator | Tuesday 31 March 2026 03:44:58 +0000 (0:00:01.241) 0:00:35.763 *********
2026-03-31 03:45:03.676787 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:03.676799 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:45:03.676812 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:45:03.676824 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:45:03.676836 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:45:03.676848 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:45:03.676861 | orchestrator |
2026-03-31 03:45:03.676873 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-03-31 03:45:03.676886 | orchestrator | Tuesday 31 March 2026 03:44:59 +0000 (0:00:00.148) 0:00:36.619 *********
2026-03-31 03:45:03.676927 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:03.676942 | orchestrator |
2026-03-31 03:45:03.676955 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-03-31 03:45:03.676968 | orchestrator | Tuesday 31 March 2026 03:44:59 +0000 (0:00:00.658) 0:00:36.767 *********
2026-03-31 03:45:03.676980 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:03.676992 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:45:03.677015 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:45:03.677027 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:45:03.677040 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:45:03.677052 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:45:03.677064 | orchestrator |
2026-03-31 03:45:03.677076 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-31 03:45:03.677089 | orchestrator | Tuesday 31 March 2026 03:44:59 +0000 (0:00:00.658) 0:00:37.426 *********
2026-03-31 03:45:03.677103 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:45:03.677115 | orchestrator |
2026-03-31 03:45:03.677126 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-03-31 03:45:03.677137 | orchestrator | Tuesday 31 March 2026 03:45:01 +0000 (0:00:01.374) 0:00:38.800 *********
2026-03-31 03:45:03.677148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:03.677170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:04.225691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:04.225722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:04.225735 | orchestrator |
2026-03-31 03:45:04.225747 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-03-31 03:45:04.225760 | orchestrator | Tuesday 31 March 2026 03:45:03 +0000 (0:00:02.287) 0:00:41.088 *********
2026-03-31 03:45:04.225778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:04.225840 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:04.225853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:04.225876 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:45:04.225887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:04.225966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:06.262415 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:45:06.262517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262577 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:45:06.262590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262602 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:45:06.262613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262623 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:45:06.262633 | orchestrator |
2026-03-31 03:45:06.262644 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-03-31 03:45:06.262656 | orchestrator | Tuesday 31 March 2026 03:45:04 +0000 (0:00:00.919) 0:00:42.007 *********
2026-03-31 03:45:06.262668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:06.262713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:06.262749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-31 03:45:06.262772 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:45:06.262783 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:45:06.262793 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:45:06.262803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262813 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:45:06.262824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:06.262834 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:45:06.262854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093642 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:45:14.093740 | orchestrator |
2026-03-31 03:45:14.093750 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-03-31 03:45:14.093758 | orchestrator | Tuesday 31 March 2026 03:45:06 +0000 (0:00:01.660) 0:00:43.668 *********
2026-03-31 03:45:14.093778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-31 03:45:14.093852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:14.093859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:14.093865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:14.093870 | orchestrator | 2026-03-31 03:45:14.093884 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-31 03:45:14.093953 | orchestrator | Tuesday 31 March 2026 03:45:08 +0000 (0:00:02.584) 0:00:46.252 
********* 2026-03-31 03:45:14.093961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:14.093967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:14.093983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.855547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.855646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.855659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.855672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:23.855689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:23.855737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:23.855753 | orchestrator | 2026-03-31 03:45:23.855768 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-31 03:45:23.855804 | orchestrator | Tuesday 31 March 2026 03:45:14 +0000 (0:00:05.264) 0:00:51.517 ********* 2026-03-31 03:45:23.855821 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:45:23.855835 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 03:45:23.855843 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 03:45:23.855851 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:45:23.855865 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:45:23.855873 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-31 03:45:23.855881 | orchestrator | 2026-03-31 03:45:23.855889 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-31 03:45:23.855992 | orchestrator | Tuesday 31 March 2026 03:45:15 +0000 (0:00:01.709) 0:00:53.227 ********* 2026-03-31 03:45:23.856002 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:45:23.856009 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:45:23.856017 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:45:23.856024 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:23.856032 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:23.856040 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:23.856047 | orchestrator | 2026-03-31 03:45:23.856055 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-31 
03:45:23.856064 | orchestrator | Tuesday 31 March 2026 03:45:16 +0000 (0:00:00.603) 0:00:53.831 ********* 2026-03-31 03:45:23.856071 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:23.856079 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:23.856087 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:45:23.856095 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:23.856103 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:45:23.856110 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:45:23.856118 | orchestrator | 2026-03-31 03:45:23.856126 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-31 03:45:23.856134 | orchestrator | Tuesday 31 March 2026 03:45:18 +0000 (0:00:01.653) 0:00:55.484 ********* 2026-03-31 03:45:23.856142 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:23.856149 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:23.856157 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:23.856165 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:45:23.856172 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:45:23.856180 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:45:23.856187 | orchestrator | 2026-03-31 03:45:23.856195 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-31 03:45:23.856203 | orchestrator | Tuesday 31 March 2026 03:45:19 +0000 (0:00:01.345) 0:00:56.830 ********* 2026-03-31 03:45:23.856211 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:45:23.856227 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-31 03:45:23.856235 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-31 03:45:23.856243 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-31 03:45:23.856251 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-31 03:45:23.856259 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-03-31 03:45:23.856266 | orchestrator | 2026-03-31 03:45:23.856274 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-31 03:45:23.856282 | orchestrator | Tuesday 31 March 2026 03:45:21 +0000 (0:00:01.838) 0:00:58.669 ********* 2026-03-31 03:45:23.856291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.856301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.856310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:23.856331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:24.797003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:24.797189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:45:24.797221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:24.797243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:24.797262 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:45:24.797282 | orchestrator | 2026-03-31 03:45:24.797303 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-31 03:45:24.797324 | orchestrator | Tuesday 31 March 2026 03:45:23 +0000 (0:00:02.606) 0:01:01.276 ********* 2026-03-31 03:45:24.797363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:24.797410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:24.797447 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:45:24.797470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:24.797491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:24.797511 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:45:24.797530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:24.797551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:24.797579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:24.797598 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:45:24.797619 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:24.797650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360368 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:28.360515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360531 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:28.360535 | orchestrator | 2026-03-31 03:45:28.360541 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-31 03:45:28.360546 | orchestrator | Tuesday 31 March 2026 03:45:24 +0000 (0:00:00.944) 0:01:02.220 ********* 2026-03-31 03:45:28.360550 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:45:28.360553 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 03:45:28.360557 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:45:28.360561 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:28.360565 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:28.360569 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:28.360573 | orchestrator | 2026-03-31 03:45:28.360576 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-31 03:45:28.360580 | orchestrator | Tuesday 31 March 2026 03:45:25 +0000 (0:00:00.875) 0:01:03.095 ********* 2026-03-31 03:45:28.360586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:28.360597 | orchestrator | skipping: [testbed-node-0] 2026-03-31 
03:45:28.360614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:28.360638 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:45:28.360654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-31 03:45:28.360662 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:45:28.360666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360670 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:45:28.360674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360678 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:45:28.360685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-31 03:45:28.360693 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:45:28.360697 | orchestrator | 2026-03-31 03:45:28.360701 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-31 03:45:28.360705 | orchestrator | Tuesday 31 March 2026 03:45:26 +0000 (0:00:00.907) 0:01:04.003 ********* 2026-03-31 03:45:28.360714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:46:00.704407 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:46:00.704448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-31 03:46:00.704467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:00.704483 | orchestrator | 
2026-03-31 03:46:00.704502 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-31 03:46:00.704521 | orchestrator | Tuesday 31 March 2026 03:45:28 +0000 (0:00:01.779) 0:01:05.782 ********* 2026-03-31 03:46:00.704539 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:00.704556 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:46:00.704573 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:46:00.704590 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:46:00.704606 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:46:00.704623 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:46:00.704641 | orchestrator | 2026-03-31 03:46:00.704658 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-31 03:46:00.704673 | orchestrator | Tuesday 31 March 2026 03:45:29 +0000 (0:00:00.661) 0:01:06.444 ********* 2026-03-31 03:46:00.704689 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:46:00.704719 | orchestrator | 2026-03-31 03:46:00.704734 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-31 03:46:00.704749 | orchestrator | Tuesday 31 March 2026 03:45:32 +0000 (0:00:03.960) 0:01:10.405 ********* 2026-03-31 03:46:00.704765 | orchestrator | 2026-03-31 03:46:00.704780 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-31 03:46:00.704796 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.145) 0:01:10.550 ********* 2026-03-31 03:46:00.704812 | orchestrator | 2026-03-31 03:46:00.704828 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-31 03:46:00.704844 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.144) 0:01:10.695 ********* 2026-03-31 03:46:00.704860 | orchestrator | 2026-03-31 03:46:00.704876 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-31 03:46:00.704923 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.363) 0:01:11.059 ********* 2026-03-31 03:46:00.704940 | orchestrator | 2026-03-31 03:46:00.704956 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-31 03:46:00.704972 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.079) 0:01:11.138 ********* 2026-03-31 03:46:00.704987 | orchestrator | 2026-03-31 03:46:00.705003 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-31 03:46:00.705019 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.069) 0:01:11.208 ********* 2026-03-31 03:46:00.705035 | orchestrator | 2026-03-31 03:46:00.705051 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-31 03:46:00.705075 | orchestrator | Tuesday 31 March 2026 03:45:33 +0000 (0:00:00.074) 0:01:11.282 ********* 2026-03-31 03:46:00.705091 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:46:00.705106 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:46:00.705121 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:46:00.705137 | orchestrator | 2026-03-31 03:46:00.705152 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-31 03:46:00.705168 | orchestrator | Tuesday 31 March 2026 03:45:39 +0000 (0:00:05.469) 0:01:16.752 ********* 2026-03-31 03:46:00.705183 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:46:00.705199 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:46:00.705214 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:46:00.705230 | orchestrator | 2026-03-31 03:46:00.705245 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-31 03:46:00.705261 | orchestrator | Tuesday 31 March 2026 03:45:48 +0000 
(0:00:09.201) 0:01:25.953 ********* 2026-03-31 03:46:00.705276 | orchestrator | changed: [testbed-node-5] 2026-03-31 03:46:00.705292 | orchestrator | changed: [testbed-node-3] 2026-03-31 03:46:00.705307 | orchestrator | changed: [testbed-node-4] 2026-03-31 03:46:00.705323 | orchestrator | 2026-03-31 03:46:00.705339 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:46:00.705356 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-31 03:46:00.705374 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 03:46:00.705404 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 03:46:01.257710 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-31 03:46:01.257838 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-31 03:46:01.257871 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-31 03:46:01.257984 | orchestrator | 2026-03-31 03:46:01.258004 | orchestrator | 2026-03-31 03:46:01.258083 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:46:01.258106 | orchestrator | Tuesday 31 March 2026 03:46:00 +0000 (0:00:12.165) 0:01:38.118 ********* 2026-03-31 03:46:01.258128 | orchestrator | =============================================================================== 2026-03-31 03:46:01.258147 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 12.17s 2026-03-31 03:46:01.258166 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.20s 2026-03-31 03:46:01.258187 | orchestrator | ceilometer : Restart 
ceilometer-notification container ------------------ 5.47s 2026-03-31 03:46:01.258206 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.26s 2026-03-31 03:46:01.258218 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 3.96s 2026-03-31 03:46:01.258235 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.84s 2026-03-31 03:46:01.258253 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.77s 2026-03-31 03:46:01.258272 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.53s 2026-03-31 03:46:01.258291 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.07s 2026-03-31 03:46:01.258310 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.61s 2026-03-31 03:46:01.258328 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.58s 2026-03-31 03:46:01.258346 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.29s 2026-03-31 03:46:01.258366 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.90s 2026-03-31 03:46:01.258384 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.84s 2026-03-31 03:46:01.258403 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.78s 2026-03-31 03:46:01.258420 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.71s 2026-03-31 03:46:01.258438 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.69s 2026-03-31 03:46:01.258458 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.66s 2026-03-31 03:46:01.258477 | orchestrator | ceilometer : Copying over 
event_definitions.yaml for notification service --- 1.65s 2026-03-31 03:46:01.258496 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.53s 2026-03-31 03:46:03.723444 | orchestrator | 2026-03-31 03:46:03 | INFO  | Task ba1effac-e105-4731-8bb7-b2dc99a22c55 (aodh) was prepared for execution. 2026-03-31 03:46:03.723570 | orchestrator | 2026-03-31 03:46:03 | INFO  | It takes a moment until task ba1effac-e105-4731-8bb7-b2dc99a22c55 (aodh) has been started and output is visible here. 2026-03-31 03:46:34.206718 | orchestrator | 2026-03-31 03:46:34.206862 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:46:34.206956 | orchestrator | 2026-03-31 03:46:34.206976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:46:34.207015 | orchestrator | Tuesday 31 March 2026 03:46:08 +0000 (0:00:00.281) 0:00:00.281 ********* 2026-03-31 03:46:34.207035 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:46:34.207054 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:46:34.207072 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:46:34.207090 | orchestrator | 2026-03-31 03:46:34.207108 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:46:34.207127 | orchestrator | Tuesday 31 March 2026 03:46:08 +0000 (0:00:00.322) 0:00:00.603 ********* 2026-03-31 03:46:34.207145 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-31 03:46:34.207163 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-31 03:46:34.207182 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-31 03:46:34.207201 | orchestrator | 2026-03-31 03:46:34.207252 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-31 03:46:34.207272 | orchestrator | 2026-03-31 03:46:34.207292 | orchestrator | TASK 
[aodh : include_tasks] **************************************************** 2026-03-31 03:46:34.207310 | orchestrator | Tuesday 31 March 2026 03:46:08 +0000 (0:00:00.474) 0:00:01.078 ********* 2026-03-31 03:46:34.207330 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:46:34.207351 | orchestrator | 2026-03-31 03:46:34.207370 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-31 03:46:34.207391 | orchestrator | Tuesday 31 March 2026 03:46:09 +0000 (0:00:00.589) 0:00:01.667 ********* 2026-03-31 03:46:34.207411 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-31 03:46:34.207430 | orchestrator | 2026-03-31 03:46:34.207449 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-31 03:46:34.207468 | orchestrator | Tuesday 31 March 2026 03:46:12 +0000 (0:00:03.261) 0:00:04.928 ********* 2026-03-31 03:46:34.207487 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-31 03:46:34.207505 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-31 03:46:34.207523 | orchestrator | 2026-03-31 03:46:34.207542 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-31 03:46:34.207563 | orchestrator | Tuesday 31 March 2026 03:46:18 +0000 (0:00:05.934) 0:00:10.863 ********* 2026-03-31 03:46:34.207583 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 03:46:34.207603 | orchestrator | 2026-03-31 03:46:34.207621 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-31 03:46:34.207640 | orchestrator | Tuesday 31 March 2026 03:46:22 +0000 (0:00:03.339) 0:00:14.202 ********* 2026-03-31 03:46:34.207658 | orchestrator | [WARNING]: Module did not 
set no_log for update_password 2026-03-31 03:46:34.207675 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-31 03:46:34.207693 | orchestrator | 2026-03-31 03:46:34.207711 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-03-31 03:46:34.207729 | orchestrator | Tuesday 31 March 2026 03:46:25 +0000 (0:00:03.574) 0:00:17.777 ********* 2026-03-31 03:46:34.207747 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-31 03:46:34.207764 | orchestrator | 2026-03-31 03:46:34.207783 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-31 03:46:34.207801 | orchestrator | Tuesday 31 March 2026 03:46:28 +0000 (0:00:03.042) 0:00:20.820 ********* 2026-03-31 03:46:34.207820 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-31 03:46:34.207837 | orchestrator | 2026-03-31 03:46:34.207856 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-31 03:46:34.207901 | orchestrator | Tuesday 31 March 2026 03:46:32 +0000 (0:00:03.495) 0:00:24.315 ********* 2026-03-31 03:46:34.207927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:34.207995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:34.208036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:34.208058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:34.208079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:34.208098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:34.208117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:34.208159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:35.560319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:35.560408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:35.560420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:35.560429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:35.560439 | orchestrator | 2026-03-31 03:46:35.560449 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-31 03:46:35.560460 | orchestrator | Tuesday 31 March 2026 03:46:34 +0000 (0:00:02.003) 0:00:26.318 ********* 2026-03-31 03:46:35.560469 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:35.560478 | orchestrator | 2026-03-31 
03:46:35.560487 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-31 03:46:35.560495 | orchestrator | Tuesday 31 March 2026 03:46:34 +0000 (0:00:00.134) 0:00:26.453 ********* 2026-03-31 03:46:35.560504 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:35.560513 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:46:35.560521 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:46:35.560530 | orchestrator | 2026-03-31 03:46:35.560539 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-31 03:46:35.560547 | orchestrator | Tuesday 31 March 2026 03:46:34 +0000 (0:00:00.533) 0:00:26.986 ********* 2026-03-31 03:46:35.560577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:35.560609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:35.560619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:35.560629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:35.560638 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:35.560647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:35.560656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:35.560679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:35.560699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:40.673965 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:46:40.674114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:40.674132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-31 03:46:40.674142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:40.674149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:40.674176 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:46:40.674184 | orchestrator | 2026-03-31 03:46:40.674192 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-31 03:46:40.674200 | orchestrator | Tuesday 31 March 2026 03:46:35 +0000 (0:00:00.692) 0:00:27.679 ********* 2026-03-31 03:46:40.674207 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:46:40.674214 | orchestrator | 2026-03-31 03:46:40.674221 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-31 03:46:40.674228 | orchestrator | Tuesday 
31 March 2026 03:46:36 +0000 (0:00:00.789) 0:00:28.468 ********* 2026-03-31 03:46:40.674247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:40.674270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:40.674278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:40.674285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:40.674298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-03-31 03:46:40.674306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:40.674317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:40.674330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:41.337340 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:41.337416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:41.337425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:41.337452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:41.337459 | orchestrator | 2026-03-31 03:46:41.337466 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-31 03:46:41.337474 | orchestrator | Tuesday 31 March 2026 03:46:40 +0000 (0:00:04.316) 0:00:32.785 ********* 2026-03-31 03:46:41.337482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:41.337500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:41.337520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:41.337527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:41.337533 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:41.337545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:41.337551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:41.337558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:41.337573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:41.337579 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:46:41.337592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:42.420115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-31 03:46:42.420266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420300 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:46:42.420314 | orchestrator | 2026-03-31 03:46:42.420326 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-31 03:46:42.420339 | orchestrator | Tuesday 31 March 2026 03:46:41 +0000 (0:00:00.669) 0:00:33.454 ********* 2026-03-31 03:46:42.420351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:42.420379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:42.420391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420443 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:46:42.420455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:42.420466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:42.420478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:42.420507 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:46:42.420527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 03:46:46.717851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 03:46:46.718076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 03:46:46.718104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 03:46:46.718120 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:46:46.718136 | orchestrator | 2026-03-31 03:46:46.718149 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-03-31 03:46:46.718162 | orchestrator | Tuesday 31 March 2026 03:46:42 +0000 (0:00:01.078) 0:00:34.533 ********* 2026-03-31 03:46:46.718177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:46.718210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:46.718277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:46.718293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:46.718408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792266 | orchestrator | 2026-03-31 03:46:55.792286 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-31 03:46:55.792306 | orchestrator | Tuesday 31 March 2026 03:46:46 +0000 (0:00:04.296) 0:00:38.829 ********* 2026-03-31 03:46:55.792327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:55.792371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:55.792433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:46:55.792476 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:46:55.792589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.112755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.112869 | orchestrator | 2026-03-31 03:47:01.112964 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-31 03:47:01.112986 | orchestrator | Tuesday 31 March 2026 03:46:55 +0000 (0:00:09.075) 0:00:47.905 ********* 2026-03-31 03:47:01.113006 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:47:01.113018 | orchestrator | 
changed: [testbed-node-0] 2026-03-31 03:47:01.113029 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:47:01.113041 | orchestrator | 2026-03-31 03:47:01.113052 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-31 03:47:01.113063 | orchestrator | Tuesday 31 March 2026 03:46:57 +0000 (0:00:01.861) 0:00:49.766 ********* 2026-03-31 03:47:01.113076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:47:01.113136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:47:01.113150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 03:47:01.113182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:01.113295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:53.963430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-31 03:47:53.963531 | orchestrator | 2026-03-31 03:47:53.963544 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-31 03:47:53.963554 | orchestrator | Tuesday 31 March 2026 03:47:01 +0000 (0:00:03.460) 0:00:53.227 ********* 2026-03-31 03:47:53.963561 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:47:53.963569 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:47:53.963576 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:47:53.963584 | orchestrator | 2026-03-31 03:47:53.963591 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-03-31 03:47:53.963620 | orchestrator | Tuesday 31 March 2026 03:47:01 +0000 (0:00:00.352) 0:00:53.580 ********* 2026-03-31 03:47:53.963628 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.963635 | orchestrator | 2026-03-31 03:47:53.963642 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-03-31 03:47:53.963649 | orchestrator | Tuesday 31 March 2026 03:47:03 +0000 (0:00:01.982) 0:00:55.562 ********* 2026-03-31 03:47:53.963656 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.963663 | orchestrator | 2026-03-31 
03:47:53.963671 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-03-31 03:47:53.963678 | orchestrator | Tuesday 31 March 2026 03:47:05 +0000 (0:00:02.183) 0:00:57.745 ********* 2026-03-31 03:47:53.963685 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.963692 | orchestrator | 2026-03-31 03:47:53.963699 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-31 03:47:53.963706 | orchestrator | Tuesday 31 March 2026 03:47:17 +0000 (0:00:12.371) 0:01:10.117 ********* 2026-03-31 03:47:53.963713 | orchestrator | 2026-03-31 03:47:53.963720 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-31 03:47:53.963741 | orchestrator | Tuesday 31 March 2026 03:47:18 +0000 (0:00:00.076) 0:01:10.194 ********* 2026-03-31 03:47:53.963748 | orchestrator | 2026-03-31 03:47:53.963755 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-31 03:47:53.963763 | orchestrator | Tuesday 31 March 2026 03:47:18 +0000 (0:00:00.072) 0:01:10.267 ********* 2026-03-31 03:47:53.963770 | orchestrator | 2026-03-31 03:47:53.963778 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-03-31 03:47:53.963785 | orchestrator | Tuesday 31 March 2026 03:47:18 +0000 (0:00:00.284) 0:01:10.552 ********* 2026-03-31 03:47:53.963792 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.963799 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:47:53.963806 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:47:53.963813 | orchestrator | 2026-03-31 03:47:53.963819 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-03-31 03:47:53.963827 | orchestrator | Tuesday 31 March 2026 03:47:24 +0000 (0:00:06.128) 0:01:16.680 ********* 2026-03-31 03:47:53.963834 | orchestrator | changed: 
[testbed-node-0] 2026-03-31 03:47:53.963841 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:47:53.963848 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:47:53.963854 | orchestrator | 2026-03-31 03:47:53.963862 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-03-31 03:47:53.963943 | orchestrator | Tuesday 31 March 2026 03:47:34 +0000 (0:00:10.314) 0:01:26.994 ********* 2026-03-31 03:47:53.963953 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:47:53.963961 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:47:53.963969 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.963977 | orchestrator | 2026-03-31 03:47:53.963986 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-03-31 03:47:53.963994 | orchestrator | Tuesday 31 March 2026 03:47:43 +0000 (0:00:08.290) 0:01:35.284 ********* 2026-03-31 03:47:53.964002 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:47:53.964010 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:47:53.964018 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:47:53.964026 | orchestrator | 2026-03-31 03:47:53.964034 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:47:53.964043 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-31 03:47:53.964053 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 03:47:53.964062 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 03:47:53.964080 | orchestrator | 2026-03-31 03:47:53.964094 | orchestrator | 2026-03-31 03:47:53.964106 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:47:53.964118 | orchestrator | Tuesday 31 March 2026 
03:47:53 +0000 (0:00:10.402) 0:01:45.687 *********
2026-03-31 03:47:53.964129 | orchestrator | ===============================================================================
2026-03-31 03:47:53.964143 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.37s
2026-03-31 03:47:53.964163 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.40s
2026-03-31 03:47:53.964192 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.31s
2026-03-31 03:47:53.964205 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.08s
2026-03-31 03:47:53.964217 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 8.29s
2026-03-31 03:47:53.964230 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 6.13s
2026-03-31 03:47:53.964243 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 5.93s
2026-03-31 03:47:53.964254 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.32s
2026-03-31 03:47:53.964267 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.30s
2026-03-31 03:47:53.964279 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.57s
2026-03-31 03:47:53.964292 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.50s
2026-03-31 03:47:53.964305 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.46s
2026-03-31 03:47:53.964318 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.34s
2026-03-31 03:47:53.964386 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.26s
2026-03-31 03:47:53.964401 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.04s
2026-03-31 03:47:53.964413 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.18s
2026-03-31 03:47:53.964424 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.00s
2026-03-31 03:47:53.964438 | orchestrator | aodh : Creating aodh database ------------------------------------------- 1.98s
2026-03-31 03:47:53.964449 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.86s
2026-03-31 03:47:53.964461 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.08s
2026-03-31 03:47:56.592991 | orchestrator | 2026-03-31 03:47:56 | INFO  | Task d4c1d5c0-2924-4173-ba8c-9ad81226734f (kolla-ceph-rgw) was prepared for execution.
2026-03-31 03:47:56.593078 | orchestrator | 2026-03-31 03:47:56 | INFO  | It takes a moment until task d4c1d5c0-2924-4173-ba8c-9ad81226734f (kolla-ceph-rgw) has been started and output is visible here.
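The TASKS RECAP table above is the timing summary emitted by Ansible's `profile_tasks` callback. A minimal sketch of extracting (task, seconds) pairs from such recap lines; the line format is inferred from the recap shown in this log, not a stable Ansible interface:

```python
import re

# "task name ----- 12.37s" -> capture the task name and the duration.
# Separator is a space, a run of dashes, a space (assumed from the log above).
LINE_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return [(task_name, seconds), ...] for lines matching the recap format."""
    out = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

recap = [
    "aodh : Running aodh bootstrap container -------------------------------- 12.37s",
    "service-ks-register : aodh | Creating endpoints ------------------------- 5.93s",
]
print(parse_recap(recap))
```

Sorting the parsed pairs by duration reproduces the "slowest tasks first" ordering seen in the recap.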
2026-03-31 03:48:33.655422 | orchestrator |
2026-03-31 03:48:33.655549 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:48:33.655566 | orchestrator |
2026-03-31 03:48:33.655577 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:48:33.655587 | orchestrator | Tuesday 31 March 2026 03:48:01 +0000 (0:00:00.293) 0:00:00.293 *********
2026-03-31 03:48:33.655597 | orchestrator | ok: [testbed-manager]
2026-03-31 03:48:33.655608 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:48:33.655618 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:48:33.655628 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:48:33.655639 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:48:33.655656 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:48:33.655680 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:48:33.655699 | orchestrator |
2026-03-31 03:48:33.655715 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:48:33.655731 | orchestrator | Tuesday 31 March 2026 03:48:01 +0000 (0:00:00.882) 0:00:01.176 *********
2026-03-31 03:48:33.655748 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655796 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655816 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655827 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655836 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655845 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655855 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-31 03:48:33.655865 | orchestrator |
2026-03-31 03:48:33.655957 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-31 03:48:33.655969 | orchestrator |
2026-03-31 03:48:33.655981 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-31 03:48:33.655992 | orchestrator | Tuesday 31 March 2026 03:48:02 +0000 (0:00:00.789) 0:00:01.966 *********
2026-03-31 03:48:33.656005 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:48:33.656018 | orchestrator |
2026-03-31 03:48:33.656029 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-31 03:48:33.656040 | orchestrator | Tuesday 31 March 2026 03:48:04 +0000 (0:00:01.595) 0:00:03.561 *********
2026-03-31 03:48:33.656051 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-31 03:48:33.656062 | orchestrator |
2026-03-31 03:48:33.656072 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-31 03:48:33.656083 | orchestrator | Tuesday 31 March 2026 03:48:08 +0000 (0:00:04.051) 0:00:07.613 *********
2026-03-31 03:48:33.656094 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-31 03:48:33.656113 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-31 03:48:33.656137 | orchestrator |
2026-03-31 03:48:33.656158 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-31 03:48:33.656175 | orchestrator | Tuesday 31 March 2026 03:48:14 +0000 (0:00:06.426) 0:00:14.039 *********
2026-03-31 03:48:33.656192 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-31 03:48:33.656207 | orchestrator |
2026-03-31 03:48:33.656225 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-31 03:48:33.656243 | orchestrator | Tuesday 31 March 2026 03:48:18 +0000 (0:00:03.260) 0:00:17.300 *********
2026-03-31 03:48:33.656260 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:48:33.656279 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-31 03:48:33.656297 | orchestrator |
2026-03-31 03:48:33.656313 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-31 03:48:33.656323 | orchestrator | Tuesday 31 March 2026 03:48:21 +0000 (0:00:03.785) 0:00:21.085 *********
2026-03-31 03:48:33.656332 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-31 03:48:33.656342 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-31 03:48:33.656352 | orchestrator |
2026-03-31 03:48:33.656361 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-31 03:48:33.656371 | orchestrator | Tuesday 31 March 2026 03:48:28 +0000 (0:00:06.396) 0:00:27.482 *********
2026-03-31 03:48:33.656380 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-31 03:48:33.656389 | orchestrator |
2026-03-31 03:48:33.656399 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:48:33.656408 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656418 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656439 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656449 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656459 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656488 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656506 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:33.656516 | orchestrator |
2026-03-31 03:48:33.656526 | orchestrator |
2026-03-31 03:48:33.656536 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:48:33.656545 | orchestrator | Tuesday 31 March 2026 03:48:33 +0000 (0:00:04.930) 0:00:32.412 *********
2026-03-31 03:48:33.656555 | orchestrator | ===============================================================================
2026-03-31 03:48:33.656564 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.43s
2026-03-31 03:48:33.656574 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.40s
2026-03-31 03:48:33.656583 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.93s
2026-03-31 03:48:33.656593 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.05s
2026-03-31 03:48:33.656602 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.79s
2026-03-31 03:48:33.656612 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.26s
2026-03-31 03:48:33.656621 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.60s
2026-03-31 03:48:33.656630 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s
2026-03-31 03:48:33.656640 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-03-31 03:48:36.276747 | orchestrator | 2026-03-31 03:48:36 | INFO  | Task 450c3ad7-ec5c-449b-86ef-74def1f74562 (gnocchi) was prepared for execution.
2026-03-31 03:48:36.276841 | orchestrator | 2026-03-31 03:48:36 | INFO  | It takes a moment until task 450c3ad7-ec5c-449b-86ef-74def1f74562 (gnocchi) has been started and output is visible here.
2026-03-31 03:48:42.471394 | orchestrator |
2026-03-31 03:48:42.471497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:48:42.471510 | orchestrator |
2026-03-31 03:48:42.471520 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:48:42.471529 | orchestrator | Tuesday 31 March 2026 03:48:41 +0000 (0:00:00.299) 0:00:00.299 *********
2026-03-31 03:48:42.471538 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:48:42.471548 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:48:42.471557 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:48:42.471566 | orchestrator |
2026-03-31 03:48:42.471575 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:48:42.471584 | orchestrator | Tuesday 31 March 2026 03:48:41 +0000 (0:00:00.376) 0:00:00.676 *********
2026-03-31 03:48:42.471592 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-03-31 03:48:42.471601 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-03-31 03:48:42.471611 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-03-31 03:48:42.471620 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-03-31 03:48:42.471628 | orchestrator |
2026-03-31 03:48:42.471637 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-03-31 03:48:42.471646 | orchestrator | skipping: no hosts matched
2026-03-31 03:48:42.471656 | orchestrator |
2026-03-31 03:48:42.471696 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:48:42.471706 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:42.471717 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:42.471725 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:48:42.471734 | orchestrator |
2026-03-31 03:48:42.471743 | orchestrator |
2026-03-31 03:48:42.471752 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:48:42.471760 | orchestrator | Tuesday 31 March 2026 03:48:41 +0000 (0:00:00.486) 0:00:01.163 *********
2026-03-31 03:48:42.471769 | orchestrator | ===============================================================================
2026-03-31 03:48:42.471785 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2026-03-31 03:48:42.471804 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-03-31 03:48:45.199276 | orchestrator | 2026-03-31 03:48:45 | INFO  | Task a09fee27-6e3f-4545-b87a-9e931ddaf2f3 (manila) was prepared for execution.
2026-03-31 03:48:45.199347 | orchestrator | 2026-03-31 03:48:45 | INFO  | It takes a moment until task a09fee27-6e3f-4545-b87a-9e931ddaf2f3 (manila) has been started and output is visible here.
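The "Group hosts based on enabled services" tasks above use Ansible's `group_by` to place each host in a dynamic group named `enable_<service>_<bool>` (e.g. `enable_gnocchi_False`), and a later play targets only the `_True` group; when no host lands there, the play reports "skipping: no hosts matched". A minimal sketch of the same bucketing in plain Python, with illustrative host/flag data modeled on the log rather than read from a real inventory:

```python
def group_by_service(hosts, service, flags):
    """Bucket hosts into groups named enable_<service>_<True|False>,
    mirroring what group_by does with a templated key."""
    groups = {}
    for host in hosts:
        name = f"enable_{service}_{flags[host]}"
        groups.setdefault(name, []).append(host)
    return groups

hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
flags = {h: False for h in hosts}  # gnocchi disabled everywhere, as in the log
print(group_by_service(hosts, "gnocchi", flags))
# all hosts end up in enable_gnocchi_False; enable_gnocchi_True is empty,
# which is why the "Apply role gnocchi" play matched no hosts
```

The warning "Could not match supplied host pattern, ignoring: enable_gnocchi_True" is the expected, benign symptom of that empty group.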
2026-03-31 03:49:25.202305 | orchestrator |
2026-03-31 03:49:25.202416 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:49:25.202432 | orchestrator |
2026-03-31 03:49:25.202443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:49:25.202453 | orchestrator | Tuesday 31 March 2026 03:48:49 +0000 (0:00:00.292) 0:00:00.292 *********
2026-03-31 03:49:25.202463 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:49:25.202474 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:49:25.202485 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:49:25.202494 | orchestrator |
2026-03-31 03:49:25.202504 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:49:25.202514 | orchestrator | Tuesday 31 March 2026 03:48:50 +0000 (0:00:00.361) 0:00:00.654 *********
2026-03-31 03:49:25.202524 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-03-31 03:49:25.202550 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-03-31 03:49:25.202560 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-03-31 03:49:25.202570 | orchestrator |
2026-03-31 03:49:25.202580 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-03-31 03:49:25.202590 | orchestrator |
2026-03-31 03:49:25.202599 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-31 03:49:25.202609 | orchestrator | Tuesday 31 March 2026 03:48:50 +0000 (0:00:00.452) 0:00:01.106 *********
2026-03-31 03:49:25.202618 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:49:25.202629 | orchestrator |
2026-03-31 03:49:25.202639 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-31 03:49:25.202649 | orchestrator | Tuesday 31 March 2026 03:48:51 +0000 (0:00:00.607) 0:00:01.714 *********
2026-03-31 03:49:25.202658 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:49:25.202669 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:49:25.202679 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:49:25.202688 | orchestrator |
2026-03-31 03:49:25.202698 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-03-31 03:49:25.202708 | orchestrator | Tuesday 31 March 2026 03:48:51 +0000 (0:00:00.516) 0:00:02.230 *********
2026-03-31 03:49:25.202717 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-03-31 03:49:25.202727 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-03-31 03:49:25.202789 | orchestrator |
2026-03-31 03:49:25.202801 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-03-31 03:49:25.202810 | orchestrator | Tuesday 31 March 2026 03:48:57 +0000 (0:00:06.163) 0:00:08.393 *********
2026-03-31 03:49:25.202821 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-03-31 03:49:25.202831 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-03-31 03:49:25.202841 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-03-31 03:49:25.202853 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-03-31 03:49:25.202941 | orchestrator |
2026-03-31 03:49:25.202957 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-03-31 03:49:25.202968 | orchestrator | Tuesday 31 March 2026 03:49:09 +0000 (0:00:11.713) 0:00:20.106 *********
2026-03-31 03:49:25.202980 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-31 03:49:25.202991 | orchestrator |
2026-03-31 03:49:25.203002 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-03-31 03:49:25.203014 | orchestrator | Tuesday 31 March 2026 03:49:12 +0000 (0:00:03.159) 0:00:23.266 *********
2026-03-31 03:49:25.203025 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-31 03:49:25.203036 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-03-31 03:49:25.203047 | orchestrator |
2026-03-31 03:49:25.203058 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-03-31 03:49:25.203067 | orchestrator | Tuesday 31 March 2026 03:49:16 +0000 (0:00:03.704) 0:00:26.971 *********
2026-03-31 03:49:25.203077 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-31 03:49:25.203088 | orchestrator |
2026-03-31 03:49:25.203097 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-03-31 03:49:25.203107 | orchestrator | Tuesday 31 March 2026 03:49:19 +0000 (0:00:03.051) 0:00:30.022 *********
2026-03-31 03:49:25.203116 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-03-31 03:49:25.203126 | orchestrator |
2026-03-31 03:49:25.203135 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-03-31 03:49:25.203145 | orchestrator | Tuesday 31 March 2026 03:49:23 +0000 (0:00:03.584) 0:00:33.607 *********
2026-03-31 03:49:25.203176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:25.203199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:25.203229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:25.203242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:49:25.203254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:49:25.203264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:49:25.203282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-31 03:49:35.680144 | orchestrator |
2026-03-31 03:49:35.680152 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-31 03:49:35.680160 | orchestrator | Tuesday 31 March 2026 03:49:25 +0000 (0:00:02.262) 0:00:35.869 *********
2026-03-31 03:49:35.680167 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:49:35.680173 | orchestrator |
2026-03-31 03:49:35.680179 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-03-31 03:49:35.680185 | orchestrator | Tuesday 31 March 2026 03:49:25 +0000 (0:00:00.591) 0:00:36.461 *********
2026-03-31 03:49:35.680192 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:49:35.680198 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:49:35.680204 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:49:35.680210 | orchestrator |
2026-03-31 03:49:35.680216 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-03-31 03:49:35.680223 | orchestrator | Tuesday 31 March 2026 03:49:26 +0000 (0:00:00.977) 0:00:37.438 *********
2026-03-31 03:49:35.680241 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680275 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680288 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680294 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680300 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680307 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680313 | orchestrator |
2026-03-31 03:49:35.680319 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-03-31 03:49:35.680325 | orchestrator | Tuesday 31 March 2026 03:49:28 +0000 (0:00:01.926) 0:00:39.365 *********
2026-03-31 03:49:35.680331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680338 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680344 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680350 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680356 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-31 03:49:35.680362 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-31 03:49:35.680368 | orchestrator |
2026-03-31 03:49:35.680374 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-03-31 03:49:35.680380 | orchestrator | Tuesday 31 March 2026 03:49:30 +0000 (0:00:01.266) 0:00:40.632 *********
2026-03-31 03:49:35.680388 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-03-31 03:49:35.680394 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-03-31 03:49:35.680400 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-03-31 03:49:35.680406 | orchestrator |
2026-03-31 03:49:35.680412 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-03-31 03:49:35.680419 | orchestrator | Tuesday 31 March 2026 03:49:30 +0000 (0:00:00.694) 0:00:41.326 *********
2026-03-31 03:49:35.680425 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:49:35.680431 | orchestrator |
2026-03-31 03:49:35.680437 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-03-31 03:49:35.680443 | orchestrator | Tuesday 31 March 2026 03:49:30 +0000 (0:00:00.148) 0:00:41.474 *********
2026-03-31 03:49:35.680449 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:49:35.680455 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:49:35.680461 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:49:35.680467 | orchestrator |
2026-03-31 03:49:35.680473 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-31 03:49:35.680485 | orchestrator | Tuesday 31 March 2026 03:49:31 +0000 (0:00:00.578) 0:00:42.053 *********
2026-03-31 03:49:35.680492 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 03:49:35.680498 | orchestrator |
2026-03-31 03:49:35.680505 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-03-31 03:49:35.680512 | orchestrator | Tuesday 31 March 2026 03:49:32 +0000 (0:00:00.627) 0:00:42.680 *********
2026-03-31 03:49:35.680525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:36.602838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:36.603033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-31 03:49:36.603062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 03:49:36.603086 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603228 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:36.603259 | orchestrator | 2026-03-31 03:49:36.603272 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-31 03:49:36.603285 | orchestrator | Tuesday 31 March 2026 03:49:35 +0000 (0:00:03.666) 0:00:46.346 ********* 2026-03-31 03:49:36.603310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:37.294288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294434 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:49:37.294446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:37.294456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294514 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:49:37.294523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:37.294533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:37.294566 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:49:37.294575 | orchestrator | 2026-03-31 03:49:37.294585 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-31 03:49:37.294595 | orchestrator | Tuesday 31 March 2026 03:49:36 +0000 (0:00:00.931) 0:00:47.277 ********* 2026-03-31 03:49:37.294617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:41.975573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975733 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:49:41.975743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:41.975752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975805 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:49:41.975813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:41.975827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:41.975850 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:49:41.975858 | orchestrator | 2026-03-31 03:49:41.975910 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-31 03:49:41.975921 | orchestrator | Tuesday 31 
March 2026 03:49:37 +0000 (0:00:00.918) 0:00:48.196 ********* 2026-03-31 03:49:41.975942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:49.098844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:49.099038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:49.099055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-31 03:49:49.099093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:49.099190 | orchestrator | 2026-03-31 03:49:49.099206 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-31 03:49:49.099218 | orchestrator | Tuesday 31 March 2026 03:49:42 +0000 (0:00:04.698) 0:00:52.894 ********* 2026-03-31 03:49:49.099235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:53.552315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:53.552453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:49:53.552483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:53.552551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:53.552698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:53.552734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:49:53.552801 | orchestrator | 2026-03-31 03:49:53.552819 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-03-31 03:49:53.552852 | orchestrator | Tuesday 31 March 2026 03:49:49 +0000 (0:00:06.896) 0:00:59.791 ********* 
2026-03-31 03:49:53.552904 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-31 03:49:53.552925 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-31 03:49:53.552945 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-31 03:49:53.552964 | orchestrator | 2026-03-31 03:49:53.552983 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-31 03:49:53.552995 | orchestrator | Tuesday 31 March 2026 03:49:52 +0000 (0:00:03.752) 0:01:03.544 ********* 2026-03-31 03:49:53.553018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:56.955696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.955805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.955821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.955835 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:49:56.955864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:56.955956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.955970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.956000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.956012 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:49:56.956025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 03:49:56.956037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.956055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.956074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 03:49:56.956086 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:49:56.956098 | orchestrator | 2026-03-31 03:49:56.956110 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-31 03:49:56.956123 | orchestrator | Tuesday 31 March 2026 03:49:53 +0000 (0:00:00.695) 0:01:04.239 ********* 2026-03-31 03:49:56.956144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:50:35.963222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:50:35.963416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 03:50:35.963511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-31 03:50:35.963731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-31 03:50:35.963750 | orchestrator |
2026-03-31 03:50:35.963769 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-03-31 03:50:35.963790 | orchestrator | Tuesday 31 March 2026 03:49:57 +0000 (0:00:03.399) 0:01:07.639 *********
2026-03-31 03:50:35.963808 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:50:35.963826 | orchestrator |
2026-03-31 03:50:35.963844 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-03-31 03:50:35.963861 | orchestrator | Tuesday 31 March 2026 03:49:59 +0000 (0:00:02.035) 0:01:09.674 *********
2026-03-31 03:50:35.963908 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:50:35.963926 | orchestrator |
2026-03-31 03:50:35.963942 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-03-31 03:50:35.963958 | orchestrator | Tuesday 31 March 2026 03:50:01 +0000 (0:00:02.131) 0:01:11.806 *********
2026-03-31 03:50:35.963974 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:50:35.963992 | orchestrator |
2026-03-31 03:50:35.964012 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-31 03:50:35.964031 | orchestrator | Tuesday 31 March 2026 03:50:35 +0000 (0:00:34.493) 0:01:46.300 *********
2026-03-31 03:50:35.964049 | orchestrator |
2026-03-31 03:50:35.964080 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-31 03:51:26.109611 | orchestrator | Tuesday 31 March 2026 03:50:35 +0000 (0:00:00.075) 0:01:46.375 *********
2026-03-31 03:51:26.109715 | orchestrator |
2026-03-31 03:51:26.109731 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-31 03:51:26.109759 | orchestrator | Tuesday 31 March 2026 03:50:35 +0000 (0:00:00.073) 0:01:46.449 *********
2026-03-31 03:51:26.109768 | orchestrator |
2026-03-31 03:51:26.109783 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-03-31 03:51:26.109790 | orchestrator | Tuesday 31 March 2026 03:50:35 +0000 (0:00:00.073) 0:01:46.522 *********
2026-03-31 03:51:26.109797 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:51:26.109804 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:51:26.109811 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:51:26.109951 | orchestrator |
2026-03-31 03:51:26.109967 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-03-31 03:51:26.109979 | orchestrator | Tuesday 31 March 2026 03:50:50 +0000 (0:00:14.516) 0:02:01.039 *********
2026-03-31 03:51:26.109989 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:51:26.109998 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:51:26.110008 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:51:26.110078 | orchestrator |
2026-03-31 03:51:26.110089 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-03-31 03:51:26.110095 | orchestrator | Tuesday 31 March 2026 03:50:56 +0000 (0:00:06.251) 0:02:07.290 *********
2026-03-31 03:51:26.110101 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:51:26.110108 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:51:26.110114 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:51:26.110120 | orchestrator |
2026-03-31 03:51:26.110126 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-03-31 03:51:26.110133 | orchestrator | Tuesday 31 March 2026 03:51:07 +0000 (0:00:10.453) 0:02:17.744 *********
2026-03-31 03:51:26.110139 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:51:26.110145 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:51:26.110151 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:51:26.110158 | orchestrator |
2026-03-31 03:51:26.110165 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:51:26.110173 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 03:51:26.110183 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 03:51:26.110190 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 03:51:26.110197 | orchestrator |
2026-03-31 03:51:26.110204 | orchestrator |
2026-03-31 03:51:26.110211 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:51:26.110218 | orchestrator | Tuesday 31 March 2026 03:51:25 +0000 (0:00:18.428) 0:02:36.172 *********
2026-03-31 03:51:26.110225 | orchestrator | ===============================================================================
2026-03-31 03:51:26.110245 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 34.49s
2026-03-31 03:51:26.110253 | orchestrator | manila : Restart manila-share container -------------------------------- 18.43s
2026-03-31 03:51:26.110260 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.52s
2026-03-31 03:51:26.110267 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 11.71s
2026-03-31 03:51:26.110274 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.45s
2026-03-31 03:51:26.110281 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.90s
2026-03-31 03:51:26.110288 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.25s
2026-03-31 03:51:26.110295 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.16s
2026-03-31 03:51:26.110302 | orchestrator | manila : Copying over config.json files for services -------------------- 4.70s
2026-03-31 03:51:26.110309 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.75s
2026-03-31 03:51:26.110316 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.70s
2026-03-31 03:51:26.110323 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.67s
2026-03-31 03:51:26.110330 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.58s
2026-03-31 03:51:26.110337 | orchestrator | manila : Check manila containers ---------------------------------------- 3.40s
2026-03-31 03:51:26.110344 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.16s
2026-03-31 03:51:26.110359 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.05s
2026-03-31 03:51:26.110366 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.26s
2026-03-31 03:51:26.110375 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.13s
2026-03-31 03:51:26.110385 | orchestrator | manila : Creating Manila database --------------------------------------- 2.04s
2026-03-31 03:51:26.110396 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.93s
2026-03-31 03:51:26.530670 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-03-31 03:51:38.776820 | orchestrator | 2026-03-31 03:51:38 | INFO  | Task 83f91492-8976-4997-90b1-53b5fc85f836 (netdata) was prepared for execution.
2026-03-31 03:51:38.776911 | orchestrator | 2026-03-31 03:51:38 | INFO  | It takes a moment until task 83f91492-8976-4997-90b1-53b5fc85f836 (netdata) has been started and output is visible here.
2026-03-31 03:53:18.255687 | orchestrator |
2026-03-31 03:53:18.255788 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:53:18.255799 | orchestrator |
2026-03-31 03:53:18.255807 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:53:18.255815 | orchestrator | Tuesday 31 March 2026 03:51:43 +0000 (0:00:00.257) 0:00:00.257 *********
2026-03-31 03:53:18.255823 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-31 03:53:18.255832 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-31 03:53:18.255839 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-31 03:53:18.255846 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-31 03:53:18.255853 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-31 03:53:18.255860 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-31 03:53:18.255867 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-31 03:53:18.255874 | orchestrator |
2026-03-31 03:53:18.255881 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-31 03:53:18.255888 | orchestrator |
2026-03-31 03:53:18.255895 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-31 03:53:18.255902 | orchestrator | Tuesday 31 March 2026 03:51:44 +0000 (0:00:00.920) 0:00:01.178 *********
2026-03-31 03:53:18.255911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:53:18.255919 | orchestrator |
2026-03-31 03:53:18.255927 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-31 03:53:18.255934 | orchestrator | Tuesday 31 March 2026 03:51:46 +0000 (0:00:01.536) 0:00:02.715 *********
2026-03-31 03:53:18.255942 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:18.255950 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:18.255956 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:18.255963 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:18.255970 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:18.255977 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:18.255985 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:18.255992 | orchestrator |
2026-03-31 03:53:18.255999 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-31 03:53:18.256006 | orchestrator | Tuesday 31 March 2026 03:51:48 +0000 (0:00:02.043) 0:00:04.758 *********
2026-03-31 03:53:18.256013 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:18.256021 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:18.256028 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:18.256035 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:18.256042 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:18.256049 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:18.256056 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:18.256083 | orchestrator |
2026-03-31 03:53:18.256091 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-31 03:53:18.256112 | orchestrator | Tuesday 31 March 2026 03:51:50 +0000 (0:00:02.589) 0:00:07.348 *********
2026-03-31 03:53:18.256119 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:53:18.256126 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256133 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:53:18.256140 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:53:18.256147 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:53:18.256153 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:53:18.256159 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:53:18.256166 | orchestrator |
2026-03-31 03:53:18.256173 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-31 03:53:18.256180 | orchestrator | Tuesday 31 March 2026 03:51:52 +0000 (0:00:01.552) 0:00:08.901 *********
2026-03-31 03:53:18.256187 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256194 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:53:18.256200 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:53:18.256206 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:53:18.256213 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:53:18.256219 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:53:18.256226 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:53:18.256231 | orchestrator |
2026-03-31 03:53:18.256238 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-31 03:53:18.256245 | orchestrator | Tuesday 31 March 2026 03:52:07 +0000 (0:00:15.571) 0:00:24.472 *********
2026-03-31 03:53:18.256252 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:53:18.256260 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:53:18.256267 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:53:18.256275 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256283 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:53:18.256290 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:53:18.256298 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:53:18.256305 | orchestrator |
2026-03-31 03:53:18.256313 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-31 03:53:18.256321 | orchestrator | Tuesday 31 March 2026 03:52:49 +0000 (0:00:42.116) 0:01:06.589 *********
2026-03-31 03:53:18.256330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:53:18.256339 | orchestrator |
2026-03-31 03:53:18.256347 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-31 03:53:18.256376 | orchestrator | Tuesday 31 March 2026 03:52:51 +0000 (0:00:01.624) 0:01:08.213 *********
2026-03-31 03:53:18.256382 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-31 03:53:18.256388 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-31 03:53:18.256394 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-31 03:53:18.256400 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-31 03:53:18.256421 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-31 03:53:18.256430 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-31 03:53:18.256437 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-31 03:53:18.256444 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-31 03:53:18.256451 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-31 03:53:18.256457 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-31 03:53:18.256462 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-31 03:53:18.256468 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-31 03:53:18.256474 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-31 03:53:18.256480 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-31 03:53:18.256494 | orchestrator |
2026-03-31 03:53:18.256502 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-31 03:53:18.256510 | orchestrator | Tuesday 31 March 2026 03:52:55 +0000 (0:00:03.594) 0:01:11.808 *********
2026-03-31 03:53:18.256517 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:18.256524 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:18.256530 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:18.256536 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:18.256542 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:18.256548 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:18.256555 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:18.256561 | orchestrator |
2026-03-31 03:53:18.256567 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-31 03:53:18.256573 | orchestrator | Tuesday 31 March 2026 03:52:56 +0000 (0:00:01.532) 0:01:13.340 *********
2026-03-31 03:53:18.256579 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:53:18.256584 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256591 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:53:18.256596 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:53:18.256603 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:53:18.256609 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:53:18.256615 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:53:18.256621 | orchestrator |
2026-03-31 03:53:18.256627 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-31 03:53:18.256633 | orchestrator | Tuesday 31 March 2026 03:52:58 +0000 (0:00:01.547) 0:01:14.888 *********
2026-03-31 03:53:18.256639 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:18.256645 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:18.256651 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:18.256656 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:18.256663 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:18.256668 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:18.256674 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:18.256680 | orchestrator |
2026-03-31 03:53:18.256686 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-31 03:53:18.256692 | orchestrator | Tuesday 31 March 2026 03:52:59 +0000 (0:00:01.309) 0:01:16.198 *********
2026-03-31 03:53:18.256698 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:18.256703 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:18.256709 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:18.256715 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:18.256720 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:18.256726 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:18.256738 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:18.256744 | orchestrator |
2026-03-31 03:53:18.256749 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-31 03:53:18.256755 | orchestrator | Tuesday 31 March 2026 03:53:01 +0000 (0:00:01.677) 0:01:18.084 *********
2026-03-31 03:53:18.256761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-31 03:53:18.256770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:53:18.256776 | orchestrator |
2026-03-31 03:53:18.256782 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-31 03:53:18.256788 | orchestrator | Tuesday 31 March 2026 03:53:03 +0000 (0:00:01.677) 0:01:19.761 *********
2026-03-31 03:53:18.256794 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256799 | orchestrator |
2026-03-31 03:53:18.256805 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-31 03:53:18.256811 | orchestrator | Tuesday 31 March 2026 03:53:06 +0000 (0:00:03.664) 0:01:23.426 *********
2026-03-31 03:53:18.256817 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:53:18.256829 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:53:18.256834 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:53:18.256840 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:53:18.256846 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:53:18.256852 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:53:18.256859 | orchestrator | changed: [testbed-manager]
2026-03-31 03:53:18.256864 | orchestrator |
2026-03-31 03:53:18.256870 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:53:18.256876 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.256883 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.256889 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.256895 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.256908 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.999131 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.999239 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 03:53:18.999254 | orchestrator |
2026-03-31 03:53:18.999266 | orchestrator |
2026-03-31 03:53:18.999278 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:53:18.999291 | orchestrator | Tuesday 31 March 2026 03:53:18 +0000 (0:00:11.520) 0:01:34.946 *********
2026-03-31 03:53:18.999301 | orchestrator | ===============================================================================
2026-03-31 03:53:18.999312 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.12s
2026-03-31 03:53:18.999323 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.57s
2026-03-31 03:53:18.999334 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.52s
2026-03-31 03:53:18.999344 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.66s
2026-03-31 03:53:18.999401 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.59s
2026-03-31 03:53:18.999412 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.59s
2026-03-31 03:53:18.999423 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.04s
2026-03-31 03:53:18.999434 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.89s
2026-03-31 03:53:18.999444 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.68s
2026-03-31 03:53:18.999455 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.62s
2026-03-31 03:53:18.999465 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.55s
2026-03-31 03:53:18.999476 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.55s
2026-03-31 03:53:18.999487 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.54s
2026-03-31 03:53:18.999497 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.53s
2026-03-31 03:53:18.999510 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.31s
2026-03-31 03:53:18.999520 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s
2026-03-31 03:53:21.910497 | orchestrator | 2026-03-31 03:53:21 | INFO  | Task a020e5b6-69d0-47c8-980a-b1177bf4b661 (prometheus) was prepared for execution.
2026-03-31 03:53:21.910630 | orchestrator | 2026-03-31 03:53:21 | INFO  | It takes a moment until task a020e5b6-69d0-47c8-980a-b1177bf4b661 (prometheus) has been started and output is visible here.
2026-03-31 03:53:31.622777 | orchestrator |
2026-03-31 03:53:31.622913 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 03:53:31.622940 | orchestrator |
2026-03-31 03:53:31.622953 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 03:53:31.622964 | orchestrator | Tuesday 31 March 2026 03:53:26 +0000 (0:00:00.284) 0:00:00.284 *********
2026-03-31 03:53:31.622975 | orchestrator | ok: [testbed-manager]
2026-03-31 03:53:31.622987 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:53:31.622999 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:53:31.623010 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:53:31.623021 | orchestrator | ok: [testbed-node-3]
2026-03-31 03:53:31.623031 | orchestrator | ok: [testbed-node-4]
2026-03-31 03:53:31.623042 | orchestrator | ok: [testbed-node-5]
2026-03-31 03:53:31.623052 | orchestrator |
2026-03-31 03:53:31.623063 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 03:53:31.623074 | orchestrator | Tuesday 31 March 2026 03:53:27 +0000 (0:00:00.937) 0:00:01.221 *********
2026-03-31 03:53:31.623086 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623097 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623108 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623118 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623129 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623140 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623150 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-31 03:53:31.623161 | orchestrator |
2026-03-31 03:53:31.623171 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-31 03:53:31.623183 | orchestrator |
2026-03-31 03:53:31.623204 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-31 03:53:31.623222 | orchestrator | Tuesday 31 March 2026 03:53:28 +0000 (0:00:00.945) 0:00:02.167 *********
2026-03-31 03:53:31.623242 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 03:53:31.623264 | orchestrator |
2026-03-31 03:53:31.623283 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-31 03:53:31.623301 | orchestrator | Tuesday 31 March 2026 03:53:29 +0000 (0:00:01.448) 0:00:03.615 *********
2026-03-31 03:53:31.623351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623397 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-31 03:53:31.623451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:31.623517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:31.623560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:31.623623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:31.623658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:31.623694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:32.673084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:32.673203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-31 03:53:32.673215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673270 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-31 03:53:32.673372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:53:32.673412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-31 03:53:32.673443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:32.673454 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:32.673466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:32.673492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-31 03:53:37.938660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:37.938784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:37.938800 | orchestrator | 2026-03-31 03:53:37.938811 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-31 03:53:37.938822 | orchestrator | Tuesday 31 March 2026 03:53:32 +0000 (0:00:02.976) 0:00:06.591 ********* 2026-03-31 03:53:37.938831 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 03:53:37.938840 | orchestrator | 2026-03-31 03:53:37.938848 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-31 03:53:37.938856 | orchestrator | Tuesday 31 March 2026 03:53:34 +0000 (0:00:01.641) 0:00:08.233 ********* 2026-03-31 03:53:37.938888 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 03:53:37.938898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:37.938997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:37.939006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:37.939014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:37.939027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:37.939045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947096 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:39.947142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:39.947151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:39.947173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-03-31 03:53:39.947229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 03:53:39.947254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:39.947521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:39.947532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:39.947555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:41.100601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:41.100686 | orchestrator | 2026-03-31 03:53:41.100697 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-31 03:53:41.100706 | orchestrator | Tuesday 31 March 2026 03:53:39 +0000 (0:00:05.632) 0:00:13.866 ********* 2026-03-31 03:53:41.100715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-31 03:53:41.100724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.100732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.100834 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-31 03:53:41.100863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.100884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.100892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.100899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.100906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.100913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.100923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.100930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.100947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.779425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.779560 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:53:41.779575 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:53:41.779610 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:53:41.779624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.779634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.779658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.779667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:41.779701 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:53:41.779729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.779739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779754 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:53:41.779762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:41.779769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:41.779794 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:53:41.779801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-31 03:53:41.779816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:42.688024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:42.688148 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:53:42.688176 | orchestrator | 2026-03-31 03:53:42.688198 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-31 03:53:42.688219 | orchestrator | Tuesday 31 March 2026 03:53:41 +0000 (0:00:01.828) 0:00:15.695 ********* 2026-03-31 03:53:42.688240 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-31 03:53:42.688322 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:42.688343 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:42.688412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:42.688464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-31 03:53:42.688486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:42.688507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:42.688542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:42.688564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:42.688592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-31 03:53:42.688624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:42.688645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:42.688677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:43.969846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.969978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:43.970008 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:53:43.970147 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:53:43.970170 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:53:43.970191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:43.970212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:43.970438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:43.970471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.970491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 03:53:43.970511 | orchestrator | skipping: [testbed-node-2] 
2026-03-31 03:53:43.970558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:43.970578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.970596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.970614 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:53:43.970633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:43.970677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.970697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:43.970716 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:53:43.970735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 03:53:43.970770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 03:53:48.177826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 03:53:48.177976 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:53:48.178005 | orchestrator | 2026-03-31 03:53:48.178100 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-31 03:53:48.178123 | orchestrator | Tuesday 31 March 2026 03:53:43 +0000 (0:00:02.184) 0:00:17.879 ********* 2026-03-31 03:53:48.178143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178336 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 03:53:48.178366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178473 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:48.178508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:48.178529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:48.178559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:53:48.178578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:48.178599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:48.178632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937230 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937622 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 03:53:50.937641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:53:50.937654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937680 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:50.937701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:53:55.000736 | orchestrator | 2026-03-31 03:53:55.000846 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-31 03:53:55.000867 | orchestrator | Tuesday 31 March 2026 03:53:50 +0000 (0:00:06.955) 0:00:24.835 ********* 2026-03-31 03:53:55.000887 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 03:53:55.000908 | orchestrator | 2026-03-31 03:53:55.000928 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-31 03:53:55.000948 | orchestrator | Tuesday 31 March 2026 03:53:51 +0000 (0:00:00.947) 0:00:25.782 ********* 2026-03-31 03:53:55.000969 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.000987 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001016 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 03:53:55.001030 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001043 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001055 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001110 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001123 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319312, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001134 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001152 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001164 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001175 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001202 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:55.001297 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959515 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959671 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959685 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959696 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319338, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3751228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 03:53:56.959745 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319308, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959775 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319304, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3670235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959787 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959809 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959829 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959847 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959878 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319314, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370542, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959898 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319324, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3731287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:56.959928 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319322, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3720205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:53:58.942674 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319304, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3670235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})
2026-03-31 03:53:58.942819 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, mode=0644, owner=root:root, size=3900)
2026-03-31 03:53:58.942845 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, owner=root:root, size=3900)
2026-03-31 03:53:58.942864 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, mode=0644, owner=root:root, size=3900)
2026-03-31 03:53:58.942917 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules, mode=0644, owner=root:root, size=3900)
2026-03-31 03:53:58.942935 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, owner=root:root, size=7933)
2026-03-31 03:53:58.942952 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, mode=0644, owner=root:root, size=7933)
2026-03-31 03:53:58.942991 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:53:58.943019 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, owner=root:root, size=7933)
2026-03-31 03:53:58.943037 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, owner=root:root, size=55956)
2026-03-31 03:53:58.943053 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, mode=0644, owner=root:root, size=7933)
2026-03-31 03:53:58.943081 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, owner=root:root, size=13522)
2026-03-31 03:53:58.943098 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, owner=root:root, size=7933)
2026-03-31 03:53:58.943113 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, mode=0644, owner=root:root, size=13522)
2026-03-31 03:53:58.943141 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:54:00.554445 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:00.555042 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, mode=0644, owner=root:root, size=13522)
2026-03-31 03:54:00.555097 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, owner=root:root, size=13522)
2026-03-31 03:54:00.555109 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, owner=root:root, size=13522)
2026-03-31 03:54:00.555118 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:54:00.555128 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode=0644, owner=root:root, size=12293)
2026-03-31 03:54:00.555137 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:00.555164 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:00.555179 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:54:00.555194 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:54:00.555245 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, owner=root:root, size=5593)
2026-03-31 03:54:00.555254 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:00.555263 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:00.555273 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:00.555289 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874713 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874821 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874838 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, owner=root:root, size=5987)
2026-03-31 03:54:01.874850 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874861 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:01.874873 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874885 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874918 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874938 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874949 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:01.874961 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:01.874972 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, owner=root:root, size=3900)
2026-03-31 03:54:01.874983 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 03:54:01.874995 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:01.875024 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:03.443283 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:03.443343 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:03.443351 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:03.443357 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 03:54:03.443363 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, mode=0644, owner=root:root, size=5051)
2026-03-31 03:54:03.443368 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:03.443395 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 03:54:03.443410 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:03.443416 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 03:54:03.443421 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, mode=0644, owner=root:root, size=2309)
2026-03-31 03:54:03.443426 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:03.443431 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, mode=0644, owner=root:root, size=5051)
2026-03-31 03:54:03.443436 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 03:54:03.443448 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, mode=0644, owner=root:root, size=334)
2026-03-31 03:54:03.443458 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:04.773951 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules, mode=0644, owner=root:root, size=2309)
2026-03-31 03:54:04.774064 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, mode=0644, owner=root:root, size=5051)
2026-03-31 03:54:04.774077 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, mode=0644, owner=root:root, size=3792)
2026-03-31 03:54:04.774086 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, owner=root:root, size=3)
2026-03-31 03:54:04.774128 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules, mode=0644, owner=root:root, size=3792)
2026-03-31 03:54:04.774147 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, owner=root:root, size=7408)
2026-03-31 
03:54:04.774157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319301, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3662894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774178 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774233 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:54:04.774250 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319314, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-31 03:54:04.774265 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319305, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3674543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774280 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319320, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.371827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319301, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3662894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774308 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774317 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:54:04.774325 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319320, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.371827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:04.774341 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319318, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3713517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-31 03:54:13.026846 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319301, 'dev': 83, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3662894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.026993 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319320, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.371827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027005 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027037 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:13.027044 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319320, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.371827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319318, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3713517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027070 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319318, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3713517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027075 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319318, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3713517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027099 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:13.027104 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027115 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:13.027120 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319322, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3720205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027130 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:13.027139 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319316, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.370799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027144 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319310, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3691285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:13.027178 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319336, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3747633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843004 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319299, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3658667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1319351, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3774555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843300 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319331, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3744414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843314 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319305, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3674543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843340 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319301, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3662894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843352 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319320, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.371827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843365 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319318, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3713517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843398 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319349, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.37644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-31 03:54:40.843411 | orchestrator |
2026-03-31 03:54:40.843434 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-31 03:54:40.843447 | orchestrator | Tuesday 31 March 2026 03:54:20 +0000 (0:00:28.280) 0:00:54.062 *********
2026-03-31 03:54:40.843458 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-31 03:54:40.843470 | orchestrator |
2026-03-31 03:54:40.843481 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-31 03:54:40.843492 | orchestrator |
Tuesday 31 March 2026 03:54:20 +0000 (0:00:00.812) 0:00:54.875 *********
2026-03-31 03:54:40.843503 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843516 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843527 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843538 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843548 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843566 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843606 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843644 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843665 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843705 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843726 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843746 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843763 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843785 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843806 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843817 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843838 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843860 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843877 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843888 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843899 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.843910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843929 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.843947 | orchestrator | [WARNING]: Skipped
2026-03-31 03:54:40.843966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.843984 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-31 03:54:40.844001 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-31 03:54:40.844020 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-31 03:54:40.844038 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-31 03:54:40.844085 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 03:54:40.844104 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-31 03:54:40.844123 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-31 03:54:40.844156 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-31 03:54:40.844176 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-31 03:54:40.844195 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-31 03:54:40.844214 | orchestrator |
2026-03-31 03:54:40.844232 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-31 03:54:40.844251 | orchestrator | Tuesday 31 March 2026 03:54:22 +0000 (0:00:01.914) 0:00:56.789 *********
2026-03-31 03:54:40.844270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:40.844291 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:40.844311 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:40.844328 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:40.844347 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:40.844366 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:40.844400 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:58.705483 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.705583 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:58.705599 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.705610 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:58.705620 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.705629 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-31 03:54:58.705639 | orchestrator |
2026-03-31 03:54:58.705650 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-31 03:54:58.705661 | orchestrator | Tuesday 31 March 2026 03:54:40 +0000 (0:00:17.968) 0:01:14.758 *********
2026-03-31 03:54:58.705671 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705682 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:58.705692 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705703 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:58.705712 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705719 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:58.705725 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705731 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.705736 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705743 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.705748 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705754 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.705760 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-31 03:54:58.705766 | orchestrator |
2026-03-31 03:54:58.705772 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-31 03:54:58.705778 | orchestrator | Tuesday 31 March 2026 03:54:43 +0000 (0:00:02.842) 0:01:17.601 *********
2026-03-31 03:54:58.705784 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705791 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:58.705797 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705823 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:58.705829 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705835 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.705841 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705846 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:58.705865 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705871 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.705876 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705883 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-31 03:54:58.705888 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.705894 | orchestrator |
2026-03-31 03:54:58.705900 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-31 03:54:58.705905 | orchestrator | Tuesday 31 March 2026 03:54:45 +0000 (0:00:01.990) 0:01:19.592 *********
2026-03-31 03:54:58.705911 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-31 03:54:58.705917 | orchestrator |
2026-03-31 03:54:58.705923 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-31 03:54:58.705929 | orchestrator | Tuesday 31 March 2026 03:54:46 +0000 (0:00:00.770) 0:01:20.363 *********
2026-03-31 03:54:58.705935 | orchestrator | skipping: [testbed-manager]
2026-03-31 03:54:58.705940 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:58.705946 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:58.705952 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:58.705957 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.705963 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.705968 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.705974 | orchestrator |
2026-03-31 03:54:58.705980 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-31 03:54:58.705985 | orchestrator | Tuesday 31 March 2026 03:54:47 +0000 (0:00:00.822) 0:01:21.185 *********
2026-03-31 03:54:58.705991 | orchestrator | skipping: [testbed-manager]
2026-03-31 03:54:58.706066 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.706073 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.706079 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.706086 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:54:58.706092 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:54:58.706099 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:54:58.706105 | orchestrator |
2026-03-31 03:54:58.706112 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-31 03:54:58.706133 | orchestrator | Tuesday 31 March 2026 03:54:49 +0000 (0:00:02.245) 0:01:23.430 *********
2026-03-31 03:54:58.706141 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706148 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706155 | orchestrator | skipping: [testbed-manager]
2026-03-31 03:54:58.706161 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706168 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706174 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706181 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:58.706188 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:58.706194 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:58.706206 | orchestrator | skipping: [testbed-node-3]
2026-03-31 03:54:58.706213 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706220 | orchestrator | skipping: [testbed-node-4]
2026-03-31 03:54:58.706227 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-31 03:54:58.706234 | orchestrator | skipping: [testbed-node-5]
2026-03-31 03:54:58.706240 | orchestrator |
2026-03-31 03:54:58.706247 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-31 03:54:58.706253 | orchestrator | Tuesday 31 March 2026 03:54:51 +0000 (0:00:01.644) 0:01:25.075 *********
2026-03-31 03:54:58.706260 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-31 03:54:58.706267 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-31 03:54:58.706273 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:54:58.706280 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:54:58.706287 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-31 03:54:58.706293 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:54:58.706300
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-31 03:54:58.706306 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:54:58.706313 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-31 03:54:58.706320 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:54:58.706326 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-31 03:54:58.706332 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:54:58.706339 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-31 03:54:58.706346 | orchestrator | 2026-03-31 03:54:58.706352 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-31 03:54:58.706359 | orchestrator | Tuesday 31 March 2026 03:54:52 +0000 (0:00:01.819) 0:01:26.894 ********* 2026-03-31 03:54:58.706370 | orchestrator | [WARNING]: Skipped 2026-03-31 03:54:58.706378 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-31 03:54:58.706385 | orchestrator | due to this access issue: 2026-03-31 03:54:58.706391 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-31 03:54:58.706398 | orchestrator | not a directory 2026-03-31 03:54:58.706403 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 03:54:58.706409 | orchestrator | 2026-03-31 03:54:58.706415 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-31 03:54:58.706421 | orchestrator | Tuesday 31 March 2026 03:54:54 +0000 (0:00:01.237) 0:01:28.132 ********* 2026-03-31 03:54:58.706427 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:54:58.706433 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 03:54:58.706438 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:54:58.706444 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:54:58.706450 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:54:58.706456 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:54:58.706461 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:54:58.706467 | orchestrator | 2026-03-31 03:54:58.706473 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-31 03:54:58.706479 | orchestrator | Tuesday 31 March 2026 03:54:55 +0000 (0:00:01.015) 0:01:29.147 ********* 2026-03-31 03:54:58.706484 | orchestrator | skipping: [testbed-manager] 2026-03-31 03:54:58.706490 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:54:58.706496 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:54:58.706502 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:54:58.706512 | orchestrator | skipping: [testbed-node-3] 2026-03-31 03:54:58.706518 | orchestrator | skipping: [testbed-node-4] 2026-03-31 03:54:58.706524 | orchestrator | skipping: [testbed-node-5] 2026-03-31 03:54:58.706529 | orchestrator | 2026-03-31 03:54:58.706535 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-31 03:54:58.706541 | orchestrator | Tuesday 31 March 2026 03:54:56 +0000 (0:00:00.956) 0:01:30.104 ********* 2026-03-31 03:54:58.706555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414391 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 03:55:00.414402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 
03:55:00.414422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:00.414467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:00.414488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-31 03:55:00.414504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:00.414511 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:00.414522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:00.414530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:00.414544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:00.414551 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:00.414564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526765 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:02.526814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 03:55:02.526851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-31 03:55:02.526920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:02.526936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 03:55:02.526948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:55:02.526967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-31 03:55:02.527053 | orchestrator |
2026-03-31 03:55:02.527071 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-31 03:55:02.527084 | orchestrator | Tuesday 31 March 2026 03:55:00 +0000 (0:00:04.238) 0:01:34.343 *********
2026-03-31 03:55:02.527096 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-31 03:55:02.527107 | orchestrator | skipping: [testbed-manager]
2026-03-31 03:55:02.527119 | orchestrator |
2026-03-31 03:55:02.527130 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527141 | orchestrator | Tuesday 31 March 2026 03:55:01 +0000 (0:00:01.317) 0:01:35.660 *********
2026-03-31 03:55:02.527152 | orchestrator |
2026-03-31 03:55:02.527163 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527177 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.283) 0:01:35.943 *********
2026-03-31 03:55:02.527189 | orchestrator |
2026-03-31 03:55:02.527202 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527215 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.084) 0:01:36.028 *********
2026-03-31 03:55:02.527227 | orchestrator |
2026-03-31 03:55:02.527240 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527252 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.074) 0:01:36.102 *********
2026-03-31 03:55:02.527265 | orchestrator |
2026-03-31 03:55:02.527278 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527290 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.067) 0:01:36.170 *********
2026-03-31 03:55:02.527302 | orchestrator |
2026-03-31 03:55:02.527314 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527327 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.073) 0:01:36.244 *********
2026-03-31 03:55:02.527340 | orchestrator |
2026-03-31 03:55:02.527352 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-31 03:55:02.527374 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.079) 0:01:36.323 *********
2026-03-31 03:56:53.937142 | orchestrator |
2026-03-31 03:56:53.937238 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-31 03:56:53.937251 | orchestrator | Tuesday 31 March 2026 03:55:02 +0000 (0:00:00.098) 0:01:36.422 *********
2026-03-31 03:56:53.937260 | orchestrator | changed: [testbed-manager]
2026-03-31 03:56:53.937268 | orchestrator |
2026-03-31 03:56:53.937276 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-31 03:56:53.937283 | orchestrator | Tuesday 31 March 2026 03:55:30 +0000 (0:00:27.609) 0:02:04.031 *********
2026-03-31 03:56:53.937291 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:56:53.937298 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:56:53.937306 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:56:53.937313 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:56:53.937340 | orchestrator | changed: [testbed-manager]
2026-03-31 03:56:53.937348 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:56:53.937355 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:56:53.937362 | orchestrator |
2026-03-31 03:56:53.937370 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-31 03:56:53.937378 | orchestrator | Tuesday 31 March 2026 03:55:44 +0000 (0:00:14.241) 0:02:18.273 *********
2026-03-31 03:56:53.937391 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:56:53.937404 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:56:53.937416 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:56:53.937429 | orchestrator |
2026-03-31 03:56:53.937441 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-31 03:56:53.937453 | orchestrator | Tuesday 31 March 2026 03:55:55 +0000 (0:00:10.867) 0:02:29.140 *********
2026-03-31 03:56:53.937464 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:56:53.937477 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:56:53.937490 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:56:53.937503 | orchestrator |
2026-03-31 03:56:53.937515 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-31 03:56:53.937528 | orchestrator | Tuesday 31 March 2026 03:56:06 +0000 (0:00:11.327) 0:02:40.467 *********
2026-03-31 03:56:53.937541 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:56:53.937554 | orchestrator | changed: [testbed-manager]
2026-03-31 03:56:53.937566 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:56:53.937577 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:56:53.937588 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:56:53.937600 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:56:53.937611 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:56:53.937623 | orchestrator |
2026-03-31 03:56:53.937677 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-31 03:56:53.937691 | orchestrator | Tuesday 31 March 2026 03:56:21 +0000 (0:00:15.199) 0:02:55.666 *********
2026-03-31 03:56:53.937704 | orchestrator | changed: [testbed-manager]
2026-03-31 03:56:53.937716 | orchestrator |
2026-03-31 03:56:53.937729 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-31 03:56:53.937741 | orchestrator | Tuesday 31 March 2026 03:56:30 +0000 (0:00:09.165) 0:03:04.832 *********
2026-03-31 03:56:53.937753 | orchestrator | changed: [testbed-node-0]
2026-03-31 03:56:53.937766 | orchestrator | changed: [testbed-node-2]
2026-03-31 03:56:53.937778 | orchestrator | changed: [testbed-node-1]
2026-03-31 03:56:53.937789 | orchestrator |
2026-03-31 03:56:53.937801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-31 03:56:53.937812 | orchestrator | Tuesday 31 March 2026 03:56:41 +0000 (0:00:10.844) 0:03:15.677 *********
2026-03-31 03:56:53.937824 | orchestrator | changed: [testbed-manager]
2026-03-31 03:56:53.937835 | orchestrator |
2026-03-31 03:56:53.937847 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-31 03:56:53.937859 | orchestrator | Tuesday 31 March 2026 03:56:47 +0000 (0:00:05.849) 0:03:21.526 *********
2026-03-31 03:56:53.937870 | orchestrator | changed: [testbed-node-3]
2026-03-31 03:56:53.937882 | orchestrator | changed: [testbed-node-5]
2026-03-31 03:56:53.937893 | orchestrator | changed: [testbed-node-4]
2026-03-31 03:56:53.937905 | orchestrator |
2026-03-31 03:56:53.937917 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 03:56:53.937932 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-31 03:56:53.937947 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-31 03:56:53.937960 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-31 03:56:53.937972 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-31 03:56:53.937998 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-31 03:56:53.938010 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-31 03:56:53.938076 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-31 03:56:53.938089 | orchestrator |
2026-03-31 03:56:53.938102 | orchestrator |
2026-03-31 03:56:53.938116 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 03:56:53.938129 | orchestrator | Tuesday 31 March 2026 03:56:53 +0000 (0:00:05.732) 0:03:27.259 *********
2026-03-31 03:56:53.938141 | orchestrator | ===============================================================================
2026-03-31 03:56:53.938154 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.28s
2026-03-31 03:56:53.938206 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 27.61s
2026-03-31 03:56:53.938221 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.97s
2026-03-31 03:56:53.938233 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.20s
2026-03-31 03:56:53.938246 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.24s
2026-03-31 03:56:53.938259 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.33s
2026-03-31 03:56:53.938271 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.87s
2026-03-31 03:56:53.938283 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.84s
2026-03-31 03:56:53.938296 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.17s
2026-03-31 03:56:53.938309 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.96s
2026-03-31 03:56:53.938322 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.85s
2026-03-31 03:56:53.938334 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.73s
2026-03-31 03:56:53.938347 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.63s
2026-03-31 03:56:53.938360 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.24s
2026-03-31 03:56:53.938374 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.98s
2026-03-31 03:56:53.938386 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.84s
2026-03-31 03:56:53.938400 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.25s
2026-03-31 03:56:53.938407 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.18s
2026-03-31 03:56:53.938414 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.99s
2026-03-31 03:56:53.938421 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.91s
2026-03-31 03:56:57.438007 | orchestrator |
2026-03-31 03:56:57 | INFO  | Task a166d561-79d5-4bf7-b886-7482084766f4 (grafana) was prepared for execution. 2026-03-31 03:56:57.438170 | orchestrator | 2026-03-31 03:56:57 | INFO  | It takes a moment until task a166d561-79d5-4bf7-b886-7482084766f4 (grafana) has been started and output is visible here. 2026-03-31 03:57:07.862853 | orchestrator | 2026-03-31 03:57:07.862973 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 03:57:07.862992 | orchestrator | 2026-03-31 03:57:07.863008 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 03:57:07.863023 | orchestrator | Tuesday 31 March 2026 03:57:02 +0000 (0:00:00.272) 0:00:00.272 ********* 2026-03-31 03:57:07.863038 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:57:07.863082 | orchestrator | ok: [testbed-node-1] 2026-03-31 03:57:07.863098 | orchestrator | ok: [testbed-node-2] 2026-03-31 03:57:07.863130 | orchestrator | 2026-03-31 03:57:07.863144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 03:57:07.863159 | orchestrator | Tuesday 31 March 2026 03:57:02 +0000 (0:00:00.374) 0:00:00.646 ********* 2026-03-31 03:57:07.863173 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-31 03:57:07.863187 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-31 03:57:07.863202 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-31 03:57:07.863215 | orchestrator | 2026-03-31 03:57:07.863229 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-31 03:57:07.863243 | orchestrator | 2026-03-31 03:57:07.863257 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-31 03:57:07.863271 | orchestrator | Tuesday 31 March 2026 03:57:02 +0000 (0:00:00.481) 0:00:01.128 ********* 2026-03-31 
03:57:07.863285 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:57:07.863301 | orchestrator | 2026-03-31 03:57:07.863315 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-31 03:57:07.863329 | orchestrator | Tuesday 31 March 2026 03:57:03 +0000 (0:00:00.582) 0:00:01.710 ********* 2026-03-31 03:57:07.863347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863398 | orchestrator | 2026-03-31 03:57:07.863412 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-31 03:57:07.863427 | orchestrator | Tuesday 31 March 2026 03:57:04 +0000 (0:00:00.951) 0:00:02.662 ********* 2026-03-31 03:57:07.863442 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-31 03:57:07.863457 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-31 03:57:07.863471 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 03:57:07.863494 | orchestrator | 2026-03-31 03:57:07.863509 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-31 03:57:07.863524 | orchestrator | Tuesday 31 March 2026 03:57:05 +0000 (0:00:00.883) 0:00:03.545 ********* 2026-03-31 03:57:07.863554 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 03:57:07.863568 | orchestrator | 2026-03-31 03:57:07.863582 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-31 03:57:07.863618 | orchestrator | Tuesday 31 March 2026 03:57:05 +0000 (0:00:00.622) 0:00:04.168 ********* 2026-03-31 03:57:07.863657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:07.863706 | orchestrator | 2026-03-31 03:57:07.863721 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-31 03:57:07.863736 | orchestrator | Tuesday 31 March 2026 03:57:07 +0000 (0:00:01.292) 0:00:05.460 ********* 2026-03-31 03:57:07.863753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:07.863768 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:57:07.863784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:07.863809 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:57:07.863843 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:15.217244 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:57:15.217319 | orchestrator | 2026-03-31 03:57:15.217327 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-31 03:57:15.217334 | orchestrator | Tuesday 31 March 2026 03:57:07 +0000 (0:00:00.659) 0:00:06.120 ********* 2026-03-31 03:57:15.217340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:15.217349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:15.217354 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:57:15.217359 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:57:15.217363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 03:57:15.217368 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:57:15.217373 | orchestrator | 2026-03-31 03:57:15.217378 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-31 03:57:15.217382 | orchestrator | Tuesday 31 March 2026 03:57:08 +0000 (0:00:00.676) 0:00:06.796 ********* 2026-03-31 03:57:15.217403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:15.217420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:15.217437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:15.217442 | orchestrator | 2026-03-31 03:57:15.217447 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-03-31 03:57:15.217451 | orchestrator | Tuesday 31 March 2026 03:57:09 +0000 (0:00:01.292) 0:00:08.089 ********* 2026-03-31 03:57:15.217456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:15.217461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:57:15.217466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-31 03:57:15.217474 | orchestrator |
2026-03-31 03:57:15.217479 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-31 03:57:15.217484 | orchestrator | Tuesday 31 March 2026 03:57:11 +0000 (0:00:00.366) 0:00:09.828 *********
2026-03-31 03:57:15.217488 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:57:15.217493 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:57:15.217497 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:57:15.217502 | orchestrator |
2026-03-31 03:57:15.217506 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-31 03:57:15.217511 | orchestrator | Tuesday 31 March 2026 03:57:11 +0000 (0:00:00.366) 0:00:10.195 *********
2026-03-31 03:57:15.217516 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-31 03:57:15.217521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-31 03:57:15.217526 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-31 03:57:15.217530 | orchestrator |
2026-03-31 03:57:15.217535 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-31 03:57:15.217539 | orchestrator | Tuesday 31 March 2026 03:57:13 +0000 (0:00:01.341) 0:00:11.537 *********
2026-03-31 03:57:15.217547 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-31 03:57:15.217553 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-31 03:57:15.217557 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-31 03:57:15.217562 | orchestrator |
2026-03-31 03:57:15.217566 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-31 03:57:15.217625 | orchestrator | Tuesday 31 March 2026 03:57:15 +0000 (0:00:01.935) 0:00:13.473 *********
2026-03-31 03:57:21.999984 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 03:57:22.000066 | orchestrator |
2026-03-31 03:57:22.000076 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-31 03:57:22.000083 | orchestrator | Tuesday 31 March 2026 03:57:16 +0000 (0:00:00.898) 0:00:14.372 *********
2026-03-31 03:57:22.000089 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-31 03:57:22.000095 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-31 03:57:22.000101 | orchestrator | ok: [testbed-node-0]
2026-03-31 03:57:22.000108 | orchestrator | ok: [testbed-node-1]
2026-03-31 03:57:22.000113 | orchestrator | ok: [testbed-node-2]
2026-03-31 03:57:22.000119 | orchestrator |
2026-03-31 03:57:22.000124 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-31 03:57:22.000130 | orchestrator | Tuesday 31 March 2026 03:57:16 +0000 (0:00:00.749) 0:00:15.122 *********
2026-03-31 03:57:22.000135 | orchestrator | skipping: [testbed-node-0]
2026-03-31 03:57:22.000141 | orchestrator | skipping: [testbed-node-1]
2026-03-31 03:57:22.000146 | orchestrator | skipping: [testbed-node-2]
2026-03-31 03:57:22.000151 | orchestrator |
2026-03-31 03:57:22.000157 | orchestrator | TASK [grafana : Copying over custom dashboards]
******************************** 2026-03-31 03:57:22.000162 | orchestrator | Tuesday 31 March 2026 03:57:17 +0000 (0:00:00.394) 0:00:15.516 ********* 2026-03-31 03:57:22.000171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318542, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.1938846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318542, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.1938846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318542, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774922001.1938846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318773, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2372649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318773, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2372649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318773, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774922001.2372649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318585, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2008116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318585, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2008116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318585, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1774922001.2008116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318777, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2400024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318777, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2400024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:22.000333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318777, 
'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2400024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318598, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2041261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318598, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2041261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318598, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2041261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318619, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.234658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318619, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.234658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318619, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.234658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318538, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.1913023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318538, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.1913023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318538, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.1913023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318559, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.199258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318559, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.199258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318559, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.199258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:25.742988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318593, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2016633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318593, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2016633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318593, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2016633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318607, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2064893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318607, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2064893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318607, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2064893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318770, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.236144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318770, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.236144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318770, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.236144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318581, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2000177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318581, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2000177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318581, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2000177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318618, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2082696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:29.737800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318618, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2082696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.798914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318618, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2082696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318601, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2059429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318601, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2059429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318601, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2059429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318597, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2037687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318597, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2037687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799101 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318597, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2037687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318596, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2031841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318596, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2031841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799124 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318596, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2031841, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318610, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2081003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:33.799143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318610, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2081003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-31 03:57:33.799163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318610, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2081003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.634827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318594, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2023237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.634931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318594, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2023237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-31 03:57:37.634945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318594, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2023237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.634957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318768, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2351265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.634984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318768, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2351265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318768, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2351265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319286, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3643756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319286, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3643756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319286, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3643756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318830, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2779682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:57:37.635094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318830, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.2779682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[… repeated loop output condensed: between 03:57:37 and 03:57:45 the dashboard copy task reported "changed" on testbed-node-0, testbed-node-1, and testbed-node-2 for the remaining haproxy.json item and for database.json, node-rsrc-use.json, alertmanager-overview.json, opensearch.json, and node_exporter_full.json under /operations/grafana/dashboards/infrastructure/, plus prometheus-remote-write.json on testbed-node-0; every item carries identical stat metadata (mode 0644, owner root:root, regular file, dev 83, nlink 1, atime/mtime 1764530892.0) …]
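Loop output like the per-item stat dicts above is easier to review once reduced to host/file pairs. A minimal sketch of such a helper (hypothetical, not part of the job tooling): it pulls the host and the item `key` out of each `changed: [...] => (item={...})` line with a regular expression and groups the keys per host.

```python
import re
from collections import defaultdict

# Matches lines of the form:
#   changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', ...})
ITEM_RE = re.compile(
    r"changed: \[(?P<host>[\w.-]+)\] => \(item=\{'key': '(?P<key>[^']+)'"
)

def summarize_loop_output(log_text):
    """Map each host to the ordered list of loop item keys it reported as changed."""
    summary = defaultdict(list)
    for match in ITEM_RE.finditer(log_text):
        summary[match.group("host")].append(match.group("key"))
    return dict(summary)

sample = (
    "changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', "
    "'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json'}})\n"
    "changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', "
    "'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json'}})\n"
)

print(summarize_loop_output(sample))
# → {'testbed-node-0': ['infrastructure/haproxy.json'], 'testbed-node-1': ['infrastructure/haproxy.json']}
```

The regex only keys on the `changed:` prefix and the item's `'key'` field, so it works regardless of how the stat dict is wrapped across lines by the console callback.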
[… repeated loop output condensed: between 03:57:45 and 03:59:36 the same task reported "changed" on testbed-node-0, testbed-node-1, and testbed-node-2 for the remaining prometheus-remote-write.json items and for redfish.json, nodes.json, memcached.json, fluentd.json, libvirt.json, elasticsearch.json, node-cluster-rsrc-use.json, rabbitmq.json, prometheus_alertmanager.json, blackbox.json, cadvisor.json, node_exporter_side_by_side.json, and prometheus.json under /operations/grafana/dashboards/infrastructure/, each item again with mode 0644 and owner root:root …]
2026-03-31 03:59:36.861521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1319257, 'dev': 83, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774922001.3531284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-31 03:59:36.861528 | orchestrator | 2026-03-31 03:59:36.861536 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-31 03:59:36.861544 | orchestrator | Tuesday 31 March 2026 03:57:55 +0000 (0:00:37.974) 0:00:53.491 ********* 2026-03-31 03:59:36.861551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:59:36.861588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:59:36.861596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 03:59:36.861602 | orchestrator | 2026-03-31 03:59:36.861608 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-31 03:59:36.861615 | orchestrator | Tuesday 31 March 2026 03:57:56 +0000 (0:00:01.019) 0:00:54.511 ********* 2026-03-31 03:59:36.861621 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:59:36.861628 | orchestrator | 2026-03-31 03:59:36.861638 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-31 03:59:36.861644 | orchestrator | Tuesday 31 March 2026 03:57:58 +0000 (0:00:02.201) 0:00:56.712 ********* 2026-03-31 03:59:36.861650 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:59:36.861657 | orchestrator | 2026-03-31 03:59:36.861663 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-31 03:59:36.861669 | orchestrator | Tuesday 31 March 2026 03:58:00 +0000 (0:00:02.197) 0:00:58.909 ********* 2026-03-31 03:59:36.861675 | orchestrator | 2026-03-31 03:59:36.861681 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-03-31 03:59:36.861687 | orchestrator | Tuesday 31 March 2026 03:58:00 +0000 (0:00:00.097) 0:00:59.007 ********* 2026-03-31 03:59:36.861693 | orchestrator | 2026-03-31 03:59:36.861699 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-31 03:59:36.861706 | orchestrator | Tuesday 31 March 2026 03:58:00 +0000 (0:00:00.090) 0:00:59.098 ********* 2026-03-31 03:59:36.861713 | orchestrator | 2026-03-31 03:59:36.861719 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-31 03:59:36.861725 | orchestrator | Tuesday 31 March 2026 03:58:00 +0000 (0:00:00.073) 0:00:59.171 ********* 2026-03-31 03:59:36.861731 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:59:36.861737 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:59:36.861743 | orchestrator | changed: [testbed-node-0] 2026-03-31 03:59:36.861749 | orchestrator | 2026-03-31 03:59:36.861756 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-31 03:59:36.861762 | orchestrator | Tuesday 31 March 2026 03:58:03 +0000 (0:00:02.142) 0:01:01.313 ********* 2026-03-31 03:59:36.861768 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:59:36.861774 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:59:36.861785 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-31 03:59:36.861793 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-31 03:59:36.861799 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-31 03:59:36.861805 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-03-31 03:59:36.861811 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:59:36.861818 | orchestrator | 2026-03-31 03:59:36.861825 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-31 03:59:36.861832 | orchestrator | Tuesday 31 March 2026 03:58:52 +0000 (0:00:49.742) 0:01:51.056 ********* 2026-03-31 03:59:36.861839 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:59:36.861847 | orchestrator | changed: [testbed-node-1] 2026-03-31 03:59:36.861854 | orchestrator | changed: [testbed-node-2] 2026-03-31 03:59:36.861860 | orchestrator | 2026-03-31 03:59:36.861867 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-31 03:59:36.861875 | orchestrator | Tuesday 31 March 2026 03:59:31 +0000 (0:00:39.030) 0:02:30.086 ********* 2026-03-31 03:59:36.861882 | orchestrator | ok: [testbed-node-0] 2026-03-31 03:59:36.861889 | orchestrator | 2026-03-31 03:59:36.861896 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-31 03:59:36.861903 | orchestrator | Tuesday 31 March 2026 03:59:33 +0000 (0:00:02.093) 0:02:32.180 ********* 2026-03-31 03:59:36.861910 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:59:36.861917 | orchestrator | skipping: [testbed-node-1] 2026-03-31 03:59:36.861924 | orchestrator | skipping: [testbed-node-2] 2026-03-31 03:59:36.861931 | orchestrator | 2026-03-31 03:59:36.861938 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-31 03:59:36.861945 | orchestrator | Tuesday 31 March 2026 03:59:34 +0000 (0:00:00.330) 0:02:32.510 ********* 2026-03-31 03:59:36.861954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-31 03:59:36.861969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-31 03:59:37.576834 | orchestrator | 2026-03-31 03:59:37.576928 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-31 03:59:37.576938 | orchestrator | Tuesday 31 March 2026 03:59:36 +0000 (0:00:02.605) 0:02:35.115 ********* 2026-03-31 03:59:37.576943 | orchestrator | skipping: [testbed-node-0] 2026-03-31 03:59:37.576949 | orchestrator | 2026-03-31 03:59:37.576954 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 03:59:37.576960 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:59:37.576966 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:59:37.576971 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 03:59:37.576975 | orchestrator | 2026-03-31 03:59:37.576980 | orchestrator | 2026-03-31 03:59:37.576984 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 03:59:37.577006 | orchestrator | Tuesday 31 March 2026 03:59:37 +0000 (0:00:00.304) 0:02:35.420 ********* 2026-03-31 03:59:37.577027 | orchestrator | =============================================================================== 2026-03-31 03:59:37.577032 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 49.74s 2026-03-31 03:59:37.577036 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 39.03s 2026-03-31 03:59:37.577041 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.97s 2026-03-31 03:59:37.577045 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.61s 2026-03-31 03:59:37.577050 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s 2026-03-31 03:59:37.577054 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.20s 2026-03-31 03:59:37.577058 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.14s 2026-03-31 03:59:37.577063 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.09s 2026-03-31 03:59:37.577067 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.94s 2026-03-31 03:59:37.577072 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.74s 2026-03-31 03:59:37.577076 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s 2026-03-31 03:59:37.577080 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s 2026-03-31 03:59:37.577085 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-03-31 03:59:37.577089 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2026-03-31 03:59:37.577094 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.95s 2026-03-31 03:59:37.577098 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.90s 2026-03-31 03:59:37.577103 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-03-31 03:59:37.577107 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.75s 2026-03-31 03:59:37.577111 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s 2026-03-31 03:59:37.577116 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.66s 2026-03-31 03:59:37.943317 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-03-31 03:59:37.952217 | orchestrator | + set -e 2026-03-31 03:59:37.952315 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 03:59:37.953218 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 03:59:37.953271 | orchestrator | ++ INTERACTIVE=false 2026-03-31 03:59:37.953279 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-31 03:59:37.953287 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-31 03:59:37.953439 | orchestrator | + source /opt/manager-vars.sh 2026-03-31 03:59:37.954596 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-31 03:59:37.954727 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-31 03:59:37.954742 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-31 03:59:37.954756 | orchestrator | ++ CEPH_VERSION=reef 2026-03-31 03:59:37.954765 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-31 03:59:37.954863 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-31 03:59:37.954875 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-31 03:59:37.954883 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-31 03:59:37.954890 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-31 03:59:37.954897 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-31 03:59:37.954905 | orchestrator | ++ export ARA=false 2026-03-31 03:59:37.954912 | orchestrator | ++ ARA=false 2026-03-31 03:59:37.954986 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-31 03:59:37.955007 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-31 03:59:37.955015 | orchestrator | ++ export TEMPEST=false 2026-03-31 03:59:37.955022 | orchestrator | ++ 
TEMPEST=false 2026-03-31 03:59:37.955029 | orchestrator | ++ export IS_ZUUL=true 2026-03-31 03:59:37.955039 | orchestrator | ++ IS_ZUUL=true 2026-03-31 03:59:37.955047 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 03:59:37.955054 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 03:59:37.955061 | orchestrator | ++ export EXTERNAL_API=false 2026-03-31 03:59:37.955069 | orchestrator | ++ EXTERNAL_API=false 2026-03-31 03:59:37.955076 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-31 03:59:37.955105 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-31 03:59:37.955113 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-31 03:59:37.955120 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-31 03:59:37.955127 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-31 03:59:37.955134 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-31 03:59:37.956433 | orchestrator | ++ semver 9.5.0 8.0.0 2026-03-31 03:59:38.004896 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-31 03:59:38.004991 | orchestrator | + osism apply clusterapi 2026-03-31 03:59:40.214368 | orchestrator | 2026-03-31 03:59:40 | INFO  | Task 58f19abd-aded-4948-a246-c50eb995b4e1 (clusterapi) was prepared for execution. 2026-03-31 03:59:40.214478 | orchestrator | 2026-03-31 03:59:40 | INFO  | It takes a moment until task 58f19abd-aded-4948-a246-c50eb995b4e1 (clusterapi) has been started and output is visible here. 
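The script trace above shows the usual pattern of the testbed deploy scripts: `set -e`, source shared variables (including `OSISM_APPLY_RETRY=1`), then invoke `osism apply <role>`. A minimal sketch of a retry wrapper consistent with that variable follows; the function names and the stand-in command are illustrative assumptions, not the actual contents of `/opt/configuration/scripts/include.sh`.

```shell
#!/usr/bin/env bash
# Illustrative retry wrapper modeled on OSISM_APPLY_RETRY=1 from the trace
# above. Names are hypothetical; the real include.sh may differ.
set -u

OSISM_APPLY_RETRY="${OSISM_APPLY_RETRY:-1}"

apply_with_retry() {
    # $1: command to run, $2: role name. Retries up to OSISM_APPLY_RETRY
    # additional times before giving up.
    local cmd="$1" role="$2" attempt=0
    until "$cmd" "$role"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$OSISM_APPLY_RETRY" ]; then
            echo "apply $role failed after $attempt attempts" >&2
            return 1
        fi
        echo "retrying apply $role (attempt $attempt)" >&2
    done
    return 0
}

# Stand-in for "osism apply" so the sketch runs anywhere: it fails on the
# first call and succeeds on the second, exercising one retry.
fake_osism() { [ -e /tmp/.osism_ok ] && return 0; touch /tmp/.osism_ok; return 1; }

rm -f /tmp/.osism_ok
apply_with_retry fake_osism clusterapi && echo "clusterapi applied"
```

With the real `osism` CLI in place of the stub, the wrapper would re-run a failed role once before the job aborts under `set -e`.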
2026-03-31 04:00:41.363810 | orchestrator | 2026-03-31 04:00:41.363906 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-03-31 04:00:41.363916 | orchestrator | 2026-03-31 04:00:41.363923 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-03-31 04:00:41.363930 | orchestrator | Tuesday 31 March 2026 03:59:45 +0000 (0:00:00.213) 0:00:00.213 ********* 2026-03-31 04:00:41.363938 | orchestrator | included: cert_manager for testbed-manager 2026-03-31 04:00:41.363945 | orchestrator | 2026-03-31 04:00:41.363952 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-03-31 04:00:41.363959 | orchestrator | Tuesday 31 March 2026 03:59:45 +0000 (0:00:00.296) 0:00:00.510 ********* 2026-03-31 04:00:41.363966 | orchestrator | changed: [testbed-manager] 2026-03-31 04:00:41.363973 | orchestrator | 2026-03-31 04:00:41.363979 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-03-31 04:00:41.363987 | orchestrator | Tuesday 31 March 2026 03:59:51 +0000 (0:00:05.697) 0:00:06.207 ********* 2026-03-31 04:00:41.363993 | orchestrator | changed: [testbed-manager] 2026-03-31 04:00:41.364000 | orchestrator | 2026-03-31 04:00:41.364006 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-03-31 04:00:41.364012 | orchestrator | 2026-03-31 04:00:41.364019 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-03-31 04:00:41.364043 | orchestrator | Tuesday 31 March 2026 04:00:19 +0000 (0:00:28.601) 0:00:34.810 ********* 2026-03-31 04:00:41.364050 | orchestrator | ok: [testbed-manager] 2026-03-31 04:00:41.364056 | orchestrator | 2026-03-31 04:00:41.364063 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-03-31 04:00:41.364069 | orchestrator | Tuesday 
31 March 2026 04:00:20 +0000 (0:00:01.310) 0:00:36.121 ********* 2026-03-31 04:00:41.364076 | orchestrator | ok: [testbed-manager] 2026-03-31 04:00:41.364083 | orchestrator | 2026-03-31 04:00:41.364090 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-03-31 04:00:41.364140 | orchestrator | Tuesday 31 March 2026 04:00:21 +0000 (0:00:00.149) 0:00:36.271 ********* 2026-03-31 04:00:41.364146 | orchestrator | ok: [testbed-manager] 2026-03-31 04:00:41.364151 | orchestrator | 2026-03-31 04:00:41.364157 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-03-31 04:00:41.364163 | orchestrator | Tuesday 31 March 2026 04:00:38 +0000 (0:00:17.266) 0:00:53.537 ********* 2026-03-31 04:00:41.364169 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:00:41.364175 | orchestrator | 2026-03-31 04:00:41.364181 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-03-31 04:00:41.364186 | orchestrator | Tuesday 31 March 2026 04:00:38 +0000 (0:00:00.150) 0:00:53.688 ********* 2026-03-31 04:00:41.364192 | orchestrator | changed: [testbed-manager] 2026-03-31 04:00:41.364198 | orchestrator | 2026-03-31 04:00:41.364203 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:00:41.364211 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-31 04:00:41.364217 | orchestrator | 2026-03-31 04:00:41.364223 | orchestrator | 2026-03-31 04:00:41.364229 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:00:41.364257 | orchestrator | Tuesday 31 March 2026 04:00:40 +0000 (0:00:02.413) 0:00:56.101 ********* 2026-03-31 04:00:41.364262 | orchestrator | =============================================================================== 2026-03-31 04:00:41.364268 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 28.60s 2026-03-31 04:00:41.364274 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.27s 2026-03-31 04:00:41.364280 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.70s 2026-03-31 04:00:41.364286 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.41s 2026-03-31 04:00:41.364291 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.31s 2026-03-31 04:00:41.364297 | orchestrator | Include cert_manager role ----------------------------------------------- 0.30s 2026-03-31 04:00:41.364303 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s 2026-03-31 04:00:41.364309 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s 2026-03-31 04:00:41.729555 | orchestrator | + osism apply magnum 2026-03-31 04:00:44.001700 | orchestrator | 2026-03-31 04:00:44 | INFO  | Task 988dfb27-e3b3-4ee0-985f-a3a19fda7c06 (magnum) was prepared for execution. 2026-03-31 04:00:44.001794 | orchestrator | 2026-03-31 04:00:44 | INFO  | It takes a moment until task 988dfb27-e3b3-4ee0-985f-a3a19fda7c06 (magnum) has been started and output is visible here. 
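The magnum play that follows begins with `service-ks-register` tasks: creating the Keystone service, its internal and public endpoints, the service user, and the role grant. Done by hand with the OpenStack CLI, an equivalent sequence would look roughly like this (URLs taken from the log below; the role itself uses Ansible OpenStack modules, and `openstack` is replaced here by an echo stub so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
# Rough CLI equivalent of the service-ks-register steps in the magnum play;
# illustrative only. Drop the stub to run against a real cloud.
openstack() { echo "openstack $*"; }   # stub for demonstration

openstack service create --name magnum container-infra
openstack endpoint create magnum internal https://api-int.testbed.osism.xyz:9511/v1
openstack endpoint create magnum public https://api.testbed.osism.xyz:9511/v1
openstack user create --domain default --project service magnum
openstack role add --project service --user magnum admin
```

The log's `[WARNING]: Module did not set no_log for update_password` refers to the user-creation step: the module may echo password-related parameters, which is why real deployments pass the password via vaulted variables rather than on a command line as sketched here.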
2026-03-31 04:01:25.580707 | orchestrator | 2026-03-31 04:01:25.580841 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:01:25.580869 | orchestrator | 2026-03-31 04:01:25.580889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:01:25.580910 | orchestrator | Tuesday 31 March 2026 04:00:48 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 04:01:25.580930 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:01:25.580951 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:01:25.580962 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:01:25.580973 | orchestrator | 2026-03-31 04:01:25.580984 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:01:25.580995 | orchestrator | Tuesday 31 March 2026 04:00:49 +0000 (0:00:00.381) 0:00:00.656 ********* 2026-03-31 04:01:25.581078 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-31 04:01:25.581093 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-31 04:01:25.581104 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-31 04:01:25.581115 | orchestrator | 2026-03-31 04:01:25.581126 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-31 04:01:25.581137 | orchestrator | 2026-03-31 04:01:25.581148 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-31 04:01:25.581159 | orchestrator | Tuesday 31 March 2026 04:00:49 +0000 (0:00:00.535) 0:00:01.192 ********* 2026-03-31 04:01:25.581170 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:01:25.581182 | orchestrator | 2026-03-31 04:01:25.581193 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-31 
04:01:25.581204 | orchestrator | Tuesday 31 March 2026 04:00:50 +0000 (0:00:00.698) 0:00:01.890 ********* 2026-03-31 04:01:25.581215 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-31 04:01:25.581226 | orchestrator | 2026-03-31 04:01:25.581237 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-31 04:01:25.581248 | orchestrator | Tuesday 31 March 2026 04:00:53 +0000 (0:00:03.322) 0:00:05.213 ********* 2026-03-31 04:01:25.581259 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-31 04:01:25.581271 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-31 04:01:25.581282 | orchestrator | 2026-03-31 04:01:25.581320 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-31 04:01:25.581347 | orchestrator | Tuesday 31 March 2026 04:00:59 +0000 (0:00:06.030) 0:00:11.244 ********* 2026-03-31 04:01:25.581358 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-31 04:01:25.581370 | orchestrator | 2026-03-31 04:01:25.581380 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-31 04:01:25.581391 | orchestrator | Tuesday 31 March 2026 04:01:03 +0000 (0:00:03.346) 0:00:14.590 ********* 2026-03-31 04:01:25.581402 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-31 04:01:25.581413 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-31 04:01:25.581424 | orchestrator | 2026-03-31 04:01:25.581435 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-31 04:01:25.581445 | orchestrator | Tuesday 31 March 2026 04:01:06 +0000 (0:00:03.703) 0:00:18.293 ********* 2026-03-31 04:01:25.581456 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-31 04:01:25.581467 | orchestrator | 2026-03-31 04:01:25.581477 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-31 04:01:25.581488 | orchestrator | Tuesday 31 March 2026 04:01:09 +0000 (0:00:03.117) 0:00:21.411 ********* 2026-03-31 04:01:25.581498 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-31 04:01:25.581509 | orchestrator | 2026-03-31 04:01:25.581520 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-31 04:01:25.581530 | orchestrator | Tuesday 31 March 2026 04:01:13 +0000 (0:00:03.659) 0:00:25.071 ********* 2026-03-31 04:01:25.581541 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:01:25.581552 | orchestrator | 2026-03-31 04:01:25.581562 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-31 04:01:25.581574 | orchestrator | Tuesday 31 March 2026 04:01:16 +0000 (0:00:03.174) 0:00:28.246 ********* 2026-03-31 04:01:25.581585 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:01:25.581595 | orchestrator | 2026-03-31 04:01:25.581606 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-31 04:01:25.581617 | orchestrator | Tuesday 31 March 2026 04:01:20 +0000 (0:00:03.823) 0:00:32.070 ********* 2026-03-31 04:01:25.581628 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:01:25.581638 | orchestrator | 2026-03-31 04:01:25.581649 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-31 04:01:25.581660 | orchestrator | Tuesday 31 March 2026 04:01:23 +0000 (0:00:03.342) 0:00:35.412 ********* 2026-03-31 04:01:25.581695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:25.581712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:25.581737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:25.581750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:25.581762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:25.581781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:33.424866 | orchestrator | 2026-03-31 04:01:33.424950 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-31 04:01:33.424961 | orchestrator | Tuesday 31 March 2026 04:01:25 +0000 (0:00:01.667) 0:00:37.080 ********* 2026-03-31 04:01:33.424969 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:01:33.424978 | orchestrator | 2026-03-31 04:01:33.424985 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-31 04:01:33.425060 | orchestrator | Tuesday 31 March 2026 04:01:25 +0000 (0:00:00.158) 0:00:37.238 ********* 2026-03-31 04:01:33.425070 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:01:33.425076 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:01:33.425082 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:01:33.425088 | orchestrator | 2026-03-31 04:01:33.425094 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-31 04:01:33.425100 | orchestrator | Tuesday 31 March 2026 04:01:26 +0000 (0:00:00.327) 0:00:37.566 ********* 2026-03-31 04:01:33.425105 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-31 04:01:33.425111 | orchestrator | 2026-03-31 04:01:33.425117 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-31 04:01:33.425123 | orchestrator | Tuesday 31 March 2026 04:01:27 +0000 (0:00:00.968) 0:00:38.535 ********* 2026-03-31 04:01:33.425143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:33.425153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:33.425159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:33.425180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:33.425194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:33.425204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:33.425210 | orchestrator | 2026-03-31 04:01:33.425216 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-31 04:01:33.425222 
| orchestrator | Tuesday 31 March 2026 04:01:29 +0000 (0:00:02.475) 0:00:41.010 ********* 2026-03-31 04:01:33.425228 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:01:33.425235 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:01:33.425241 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:01:33.425247 | orchestrator | 2026-03-31 04:01:33.425253 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-31 04:01:33.425258 | orchestrator | Tuesday 31 March 2026 04:01:30 +0000 (0:00:00.598) 0:00:41.608 ********* 2026-03-31 04:01:33.425265 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:01:33.425271 | orchestrator | 2026-03-31 04:01:33.425277 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-31 04:01:33.425282 | orchestrator | Tuesday 31 March 2026 04:01:30 +0000 (0:00:00.640) 0:00:42.249 ********* 2026-03-31 04:01:33.425289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:33.425300 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:34.431815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:34.431940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:34.431957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:34.431970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:34.432085 | orchestrator | 2026-03-31 04:01:34.432111 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-31 04:01:34.432131 | orchestrator | Tuesday 31 March 2026 04:01:33 +0000 (0:00:02.680) 0:00:44.929 ********* 2026-03-31 04:01:34.432173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:34.432187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:34.432198 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:01:34.432218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:34.432230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:34.432242 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:01:34.432255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:34.432309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:38.352291 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:01:38.352383 | orchestrator | 2026-03-31 
04:01:38.352399 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-31 04:01:38.352416 | orchestrator | Tuesday 31 March 2026 04:01:34 +0000 (0:00:01.002) 0:00:45.931 ********* 2026-03-31 04:01:38.352435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:38.352473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:38.352490 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 04:01:38.352507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:38.352539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:38.352549 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:01:38.352575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:01:38.352590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:01:38.352600 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:01:38.352608 | orchestrator | 2026-03-31 04:01:38.352617 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-31 04:01:38.352626 | orchestrator | Tuesday 31 March 2026 04:01:35 +0000 (0:00:01.088) 0:00:47.020 ********* 2026-03-31 04:01:38.352636 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:38.352652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:38.352669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:01:45.109816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-31 04:01:45.109948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:45.109965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:45.110115 | orchestrator |
2026-03-31 04:01:45.110129 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-31 04:01:45.110140 | orchestrator | Tuesday 31 March 2026 04:01:38 +0000 (0:00:02.836) 0:00:49.857 *********
2026-03-31 04:01:45.110150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:45.110180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:45.110192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:45.110208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:45.110226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:45.110236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:45.110247 | orchestrator |
2026-03-31 04:01:45.110257 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-03-31 04:01:45.110267 | orchestrator | Tuesday 31 March 2026 04:01:44 +0000 (0:00:05.979) 0:00:55.837 *********
2026-03-31 04:01:45.110296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:47.121222 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:01:47.121250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:47.121295 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:01:47.121305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'},
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:47.121337 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:01:47.121345 | orchestrator |
2026-03-31 04:01:47.121355 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-03-31 04:01:47.121365 | orchestrator | Tuesday 31 March 2026 04:01:45 +0000 (0:00:00.782) 0:00:56.620 *********
2026-03-31 04:01:47.121379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-31 04:01:47.121417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:01:47.121434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:02:46.256723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout':
'30'}}})
2026-03-31 04:02:46.256945 | orchestrator |
2026-03-31 04:02:46.256974 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-31 04:02:46.256987 | orchestrator | Tuesday 31 March 2026 04:01:47 +0000 (0:00:02.006) 0:00:58.627 *********
2026-03-31 04:02:46.256997 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:02:46.257008 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:02:46.257018 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:02:46.257027 | orchestrator |
2026-03-31 04:02:46.257037 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-31 04:02:46.257046 | orchestrator | Tuesday 31 March 2026 04:01:47 +0000 (0:00:00.633) 0:00:59.261 *********
2026-03-31 04:02:46.257056 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:02:46.257065 | orchestrator |
2026-03-31 04:02:46.257075 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-31 04:02:46.257084 | orchestrator | Tuesday 31 March 2026 04:01:49 +0000 (0:00:02.083) 0:01:01.344 *********
2026-03-31 04:02:46.257094 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:02:46.257103 | orchestrator |
2026-03-31 04:02:46.257112 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-31 04:02:46.257122 | orchestrator | Tuesday 31 March 2026 04:01:52 +0000 (0:00:02.203) 0:01:03.547 *********
2026-03-31 04:02:46.257131 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:02:46.257141 | orchestrator |
2026-03-31 04:02:46.257150 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-31 04:02:46.257160 | orchestrator | Tuesday 31 March 2026 04:02:07 +0000 (0:00:15.915) 0:01:19.463 *********
2026-03-31 04:02:46.257169 | orchestrator |
2026-03-31 04:02:46.257178 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-31 04:02:46.257188 | orchestrator | Tuesday 31 March 2026 04:02:08 +0000 (0:00:00.124) 0:01:19.587 *********
2026-03-31 04:02:46.257197 | orchestrator |
2026-03-31 04:02:46.257207 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-31 04:02:46.257216 | orchestrator | Tuesday 31 March 2026 04:02:08 +0000 (0:00:00.108) 0:01:19.696 *********
2026-03-31 04:02:46.257226 | orchestrator |
2026-03-31 04:02:46.257238 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-31 04:02:46.257249 | orchestrator | Tuesday 31 March 2026 04:02:08 +0000 (0:00:00.077) 0:01:19.773 *********
2026-03-31 04:02:46.257261 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:02:46.257272 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:02:46.257283 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:02:46.257294 | orchestrator |
2026-03-31 04:02:46.257305 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-31 04:02:46.257316 | orchestrator | Tuesday 31 March 2026 04:02:29 +0000 (0:00:21.317) 0:01:41.090 *********
2026-03-31 04:02:46.257327 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:02:46.257339 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:02:46.257350 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:02:46.257360 | orchestrator |
2026-03-31 04:02:46.257371 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:02:46.257384 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-31 04:02:46.257397 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 04:02:46.257409 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-31 04:02:46.257429 | orchestrator |
2026-03-31 04:02:46.257440 | orchestrator |
2026-03-31 04:02:46.257452 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:02:46.257462 | orchestrator | Tuesday 31 March 2026 04:02:45 +0000 (0:00:16.189) 0:01:57.280 *********
2026-03-31 04:02:46.257474 | orchestrator | ===============================================================================
2026-03-31 04:02:46.257485 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.32s
2026-03-31 04:02:46.257497 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.19s
2026-03-31 04:02:46.257508 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.92s
2026-03-31 04:02:46.257520 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.03s
2026-03-31 04:02:46.257531 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.98s
2026-03-31 04:02:46.257541 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.82s
2026-03-31 04:02:46.257552 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.70s
2026-03-31 04:02:46.257583 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.66s
2026-03-31 04:02:46.257595 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.35s
2026-03-31 04:02:46.257606 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.34s
2026-03-31 04:02:46.257618 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.32s
2026-03-31 04:02:46.257629 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.17s
2026-03-31 04:02:46.257647 |
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.12s
2026-03-31 04:02:46.257657 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.84s
2026-03-31 04:02:46.257666 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.68s
2026-03-31 04:02:46.257676 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.48s
2026-03-31 04:02:46.257685 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.20s
2026-03-31 04:02:46.257694 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.08s
2026-03-31 04:02:46.257704 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.01s
2026-03-31 04:02:46.257713 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.67s
2026-03-31 04:02:46.995265 | orchestrator | ok: Runtime: 1:45:45.185421
2026-03-31 04:02:47.271512 |
2026-03-31 04:02:47.271656 | TASK [Deploy in a nutshell]
2026-03-31 04:02:47.807680 | orchestrator | skipping: Conditional result was False
2026-03-31 04:02:47.848476 |
2026-03-31 04:02:47.848696 | TASK [Bootstrap services]
2026-03-31 04:02:48.584923 | orchestrator |
2026-03-31 04:02:48.585163 | orchestrator | # BOOTSTRAP
2026-03-31 04:02:48.585198 | orchestrator |
2026-03-31 04:02:48.585220 | orchestrator | + set -e
2026-03-31 04:02:48.585241 | orchestrator | + echo
2026-03-31 04:02:48.585262 | orchestrator | + echo '# BOOTSTRAP'
2026-03-31 04:02:48.585290 | orchestrator | + echo
2026-03-31 04:02:48.585352 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-31 04:02:48.590427 | orchestrator | + set -e
2026-03-31 04:02:48.590569 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-31 04:02:50.810298 | orchestrator | 2026-03-31 04:02:50 | INFO  | It takes a moment until task 9ae8b387-e7a7-426e-a9fb-7f0f03e90572 (flavor-manager) has been started and output is visible here.
2026-03-31 04:02:58.959563 | orchestrator | 2026-03-31 04:02:54 | INFO  | Flavor SCS-1L-1 created
2026-03-31 04:02:58.960428 | orchestrator | 2026-03-31 04:02:54 | INFO  | Flavor SCS-1L-1-5 created
2026-03-31 04:02:58.960455 | orchestrator | 2026-03-31 04:02:54 | INFO  | Flavor SCS-1V-2 created
2026-03-31 04:02:58.960461 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-1V-2-5 created
2026-03-31 04:02:58.960467 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-1V-4 created
2026-03-31 04:02:58.960472 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-1V-4-10 created
2026-03-31 04:02:58.960477 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-1V-8 created
2026-03-31 04:02:58.960484 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-1V-8-20 created
2026-03-31 04:02:58.960496 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-2V-4 created
2026-03-31 04:02:58.960502 | orchestrator | 2026-03-31 04:02:55 | INFO  | Flavor SCS-2V-4-10 created
2026-03-31 04:02:58.960507 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-2V-8 created
2026-03-31 04:02:58.960512 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-2V-8-20 created
2026-03-31 04:02:58.960517 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-2V-16 created
2026-03-31 04:02:58.960522 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-2V-16-50 created
2026-03-31 04:02:58.960527 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-4V-8 created
2026-03-31 04:02:58.960532 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-4V-8-20 created
2026-03-31 04:02:58.960536 | orchestrator | 2026-03-31 04:02:56 | INFO  | Flavor SCS-4V-16 created
2026-03-31 04:02:58.960541 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-4V-16-50 created
2026-03-31 04:02:58.960546 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-4V-32 created
2026-03-31 04:02:58.960551 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-4V-32-100 created
2026-03-31 04:02:58.960556 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-8V-16 created
2026-03-31 04:02:58.960561 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-8V-16-50 created
2026-03-31 04:02:58.960566 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-8V-32 created
2026-03-31 04:02:58.960571 | orchestrator | 2026-03-31 04:02:57 | INFO  | Flavor SCS-8V-32-100 created
2026-03-31 04:02:58.960576 | orchestrator | 2026-03-31 04:02:58 | INFO  | Flavor SCS-16V-32 created
2026-03-31 04:02:58.960581 | orchestrator | 2026-03-31 04:02:58 | INFO  | Flavor SCS-16V-32-100 created
2026-03-31 04:02:58.960586 | orchestrator | 2026-03-31 04:02:58 | INFO  | Flavor SCS-2V-4-20s created
2026-03-31 04:02:58.960591 | orchestrator | 2026-03-31 04:02:58 | INFO  | Flavor SCS-4V-8-50s created
2026-03-31 04:02:58.960596 | orchestrator | 2026-03-31 04:02:58 | INFO  | Flavor SCS-8V-32-100s created
2026-03-31 04:03:01.576911 | orchestrator | 2026-03-31 04:03:01 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-31 04:03:11.762941 | orchestrator | 2026-03-31 04:03:11 | INFO  | Task aeff0d97-cf07-4c28-9d9e-d27938137314 (bootstrap-basic) was prepared for execution.
2026-03-31 04:03:11.763084 | orchestrator | 2026-03-31 04:03:11 | INFO  | It takes a moment until task aeff0d97-cf07-4c28-9d9e-d27938137314 (bootstrap-basic) has been started and output is visible here.
2026-03-31 04:03:58.957261 | orchestrator |
2026-03-31 04:03:58.957343 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-31 04:03:58.957350 | orchestrator |
2026-03-31 04:03:58.957355 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-31 04:03:58.957360 | orchestrator | Tuesday 31 March 2026 04:03:16 +0000 (0:00:00.081) 0:00:00.081 *********
2026-03-31 04:03:58.957365 | orchestrator | ok: [localhost]
2026-03-31 04:03:58.957369 | orchestrator |
2026-03-31 04:03:58.957373 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-31 04:03:58.957377 | orchestrator | Tuesday 31 March 2026 04:03:18 +0000 (0:00:02.065) 0:00:02.147 *********
2026-03-31 04:03:58.957381 | orchestrator | ok: [localhost]
2026-03-31 04:03:58.957385 | orchestrator |
2026-03-31 04:03:58.957389 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-31 04:03:58.957393 | orchestrator | Tuesday 31 March 2026 04:03:26 +0000 (0:00:07.518) 0:00:09.665 *********
2026-03-31 04:03:58.957397 | orchestrator | changed: [localhost]
2026-03-31 04:03:58.957401 | orchestrator |
2026-03-31 04:03:58.957405 | orchestrator | TASK [Create public network] ***************************************************
2026-03-31 04:03:58.957409 | orchestrator | Tuesday 31 March 2026 04:03:32 +0000 (0:00:06.853) 0:00:16.519 *********
2026-03-31 04:03:58.957413 | orchestrator | changed: [localhost]
2026-03-31 04:03:58.957417 | orchestrator |
2026-03-31 04:03:58.957421 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-31 04:03:58.957425 | orchestrator | Tuesday 31 March 2026 04:03:38 +0000 (0:00:05.945) 0:00:22.465 *********
2026-03-31 04:03:58.957432 | orchestrator | changed: [localhost]
2026-03-31 04:03:58.957436 | orchestrator |
2026-03-31 04:03:58.957440 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-31 04:03:58.957444 | orchestrator | Tuesday 31 March 2026 04:03:45 +0000 (0:00:07.008) 0:00:29.473 *********
2026-03-31 04:03:58.957448 | orchestrator | changed: [localhost]
2026-03-31 04:03:58.957452 | orchestrator |
2026-03-31 04:03:58.957456 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-31 04:03:58.957460 | orchestrator | Tuesday 31 March 2026 04:03:50 +0000 (0:00:04.742) 0:00:34.216 *********
2026-03-31 04:03:58.957463 | orchestrator | changed: [localhost]
2026-03-31 04:03:58.957467 | orchestrator |
2026-03-31 04:03:58.957471 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-31 04:03:58.957481 | orchestrator | Tuesday 31 March 2026 04:03:54 +0000 (0:00:04.200) 0:00:38.416 *********
2026-03-31 04:03:58.957485 | orchestrator | ok: [localhost]
2026-03-31 04:03:58.957489 | orchestrator |
2026-03-31 04:03:58.957493 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:03:58.957497 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-31 04:03:58.957502 | orchestrator |
2026-03-31 04:03:58.957506 | orchestrator |
2026-03-31 04:03:58.957510 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:03:58.957513 | orchestrator | Tuesday 31 March 2026 04:03:58 +0000 (0:00:03.891) 0:00:42.308 *********
2026-03-31 04:03:58.957517 | orchestrator | ===============================================================================
2026-03-31 04:03:58.957521 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.52s
2026-03-31 04:03:58.957525 | orchestrator | Set public network to default ------------------------------------------- 7.01s
2026-03-31 04:03:58.957529 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.85s
2026-03-31 04:03:58.957532 | orchestrator | Create public network --------------------------------------------------- 5.95s
2026-03-31 04:03:58.957552 | orchestrator | Create public subnet ---------------------------------------------------- 4.74s
2026-03-31 04:03:58.957556 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.20s
2026-03-31 04:03:58.957560 | orchestrator | Create manager role ----------------------------------------------------- 3.89s
2026-03-31 04:03:58.957564 | orchestrator | Gathering Facts --------------------------------------------------------- 2.07s
2026-03-31 04:04:01.798590 | orchestrator | 2026-03-31 04:04:01 | INFO  | It takes a moment until task 9cbaa945-5c9f-441d-9f8f-363b7fa1131d (image-manager) has been started and output is visible here.
2026-03-31 04:04:43.283324 | orchestrator | 2026-03-31 04:04:04 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-31 04:04:43.283404 | orchestrator | 2026-03-31 04:04:04 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-31 04:04:43.283411 | orchestrator | 2026-03-31 04:04:04 | INFO  | Importing image Cirros 0.6.2
2026-03-31 04:04:43.283416 | orchestrator | 2026-03-31 04:04:04 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-31 04:04:43.283421 | orchestrator | 2026-03-31 04:04:06 | INFO  | Waiting for image to leave queued state...
2026-03-31 04:04:43.283426 | orchestrator | 2026-03-31 04:04:08 | INFO  | Waiting for import to complete...
2026-03-31 04:04:43.283430 | orchestrator | 2026-03-31 04:04:19 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-31 04:04:43.283435 | orchestrator | 2026-03-31 04:04:19 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-31 04:04:43.283439 | orchestrator | 2026-03-31 04:04:19 | INFO  | Setting internal_version = 0.6.2
2026-03-31 04:04:43.283444 | orchestrator | 2026-03-31 04:04:19 | INFO  | Setting image_original_user = cirros
2026-03-31 04:04:43.283448 | orchestrator | 2026-03-31 04:04:19 | INFO  | Adding tag os:cirros
2026-03-31 04:04:43.283452 | orchestrator | 2026-03-31 04:04:19 | INFO  | Setting property architecture: x86_64
2026-03-31 04:04:43.283456 | orchestrator | 2026-03-31 04:04:19 | INFO  | Setting property hw_disk_bus: scsi
2026-03-31 04:04:43.283459 | orchestrator | 2026-03-31 04:04:20 | INFO  | Setting property hw_rng_model: virtio
2026-03-31 04:04:43.283464 | orchestrator | 2026-03-31 04:04:20 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-31 04:04:43.283468 | orchestrator | 2026-03-31 04:04:20 | INFO  | Setting property hw_watchdog_action: reset
2026-03-31 04:04:43.283472 | orchestrator | 2026-03-31 04:04:20 | INFO  | Setting property hypervisor_type: qemu
2026-03-31 04:04:43.283475 | orchestrator | 2026-03-31 04:04:21 | INFO  | Setting property os_distro: cirros
2026-03-31 04:04:43.283479 | orchestrator | 2026-03-31 04:04:21 | INFO  | Setting property os_purpose: minimal
2026-03-31 04:04:43.283483 | orchestrator | 2026-03-31 04:04:21 | INFO  | Setting property replace_frequency: never
2026-03-31 04:04:43.283487 | orchestrator | 2026-03-31 04:04:21 | INFO  | Setting property uuid_validity: none
2026-03-31 04:04:43.283490 | orchestrator | 2026-03-31 04:04:21 | INFO  | Setting property provided_until: none
2026-03-31 04:04:43.283494 | orchestrator | 2026-03-31 04:04:22 | INFO  | Setting property image_description: Cirros
2026-03-31 04:04:43.283498 | orchestrator | 2026-03-31 04:04:22 | INFO  | Setting property image_name: Cirros
2026-03-31 04:04:43.283502 | orchestrator | 2026-03-31 04:04:22 | INFO  | Setting property internal_version: 0.6.2
2026-03-31 04:04:43.283505 | orchestrator | 2026-03-31 04:04:22 | INFO  | Setting property image_original_user: cirros
2026-03-31 04:04:43.283525 | orchestrator | 2026-03-31 04:04:23 | INFO  | Setting property os_version: 0.6.2
2026-03-31 04:04:43.283533 | orchestrator | 2026-03-31 04:04:23 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-31 04:04:43.283539 | orchestrator | 2026-03-31 04:04:23 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-31 04:04:43.283543 | orchestrator | 2026-03-31 04:04:24 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-31 04:04:43.283547 | orchestrator | 2026-03-31 04:04:24 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-31 04:04:43.283550 | orchestrator | 2026-03-31 04:04:24 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-31 04:04:43.283554 | orchestrator | 2026-03-31 04:04:24 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-31 04:04:43.283560 | orchestrator | 2026-03-31 04:04:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-31 04:04:43.283564 | orchestrator | 2026-03-31 04:04:24 | INFO  | Importing image Cirros 0.6.3
2026-03-31 04:04:43.283568 | orchestrator | 2026-03-31 04:04:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-31 04:04:43.283572 | orchestrator | 2026-03-31 04:04:24 | INFO  | Waiting for image to leave queued state...
2026-03-31 04:04:43.283575 | orchestrator | 2026-03-31 04:04:26 | INFO  | Waiting for import to complete...
2026-03-31 04:04:43.283589 | orchestrator | 2026-03-31 04:04:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-31 04:04:43.283594 | orchestrator | 2026-03-31 04:04:37 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-31 04:04:43.283600 | orchestrator | 2026-03-31 04:04:37 | INFO  | Setting internal_version = 0.6.3
2026-03-31 04:04:43.283605 | orchestrator | 2026-03-31 04:04:37 | INFO  | Setting image_original_user = cirros
2026-03-31 04:04:43.283611 | orchestrator | 2026-03-31 04:04:37 | INFO  | Adding tag os:cirros
2026-03-31 04:04:43.283617 | orchestrator | 2026-03-31 04:04:37 | INFO  | Setting property architecture: x86_64
2026-03-31 04:04:43.283622 | orchestrator | 2026-03-31 04:04:38 | INFO  | Setting property hw_disk_bus: scsi
2026-03-31 04:04:43.283628 | orchestrator | 2026-03-31 04:04:38 | INFO  | Setting property hw_rng_model: virtio
2026-03-31 04:04:43.283633 | orchestrator | 2026-03-31 04:04:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-31 04:04:43.283639 | orchestrator | 2026-03-31 04:04:38 | INFO  | Setting property hw_watchdog_action: reset
2026-03-31 04:04:43.283644 | orchestrator | 2026-03-31 04:04:38 | INFO  | Setting property hypervisor_type: qemu
2026-03-31 04:04:43.283650 | orchestrator | 2026-03-31 04:04:39 | INFO  | Setting property os_distro: cirros
2026-03-31 04:04:43.283655 | orchestrator | 2026-03-31 04:04:39 | INFO  | Setting property os_purpose: minimal
2026-03-31 04:04:43.283661 | orchestrator | 2026-03-31 04:04:39 | INFO  | Setting property replace_frequency: never
2026-03-31 04:04:43.283667 | orchestrator | 2026-03-31 04:04:39 | INFO  | Setting property uuid_validity: none
2026-03-31 04:04:43.283673 | orchestrator | 2026-03-31 04:04:40 | INFO  | Setting property provided_until: none
2026-03-31 04:04:43.283678 | orchestrator | 2026-03-31 04:04:40 | INFO  | Setting property image_description: Cirros
2026-03-31 04:04:43.283684 | orchestrator | 2026-03-31 04:04:40 | INFO  | Setting property image_name: Cirros
2026-03-31 04:04:43.283690 | orchestrator | 2026-03-31 04:04:41 | INFO  | Setting property internal_version: 0.6.3
2026-03-31 04:04:43.283739 | orchestrator | 2026-03-31 04:04:41 | INFO  | Setting property image_original_user: cirros
2026-03-31 04:04:43.283744 | orchestrator | 2026-03-31 04:04:41 | INFO  | Setting property os_version: 0.6.3
2026-03-31 04:04:43.283748 | orchestrator | 2026-03-31 04:04:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-31 04:04:43.283752 | orchestrator | 2026-03-31 04:04:42 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-31 04:04:43.283755 | orchestrator | 2026-03-31 04:04:42 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-31 04:04:43.283759 | orchestrator | 2026-03-31 04:04:42 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-31 04:04:43.283763 | orchestrator | 2026-03-31 04:04:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-31 04:04:43.699328 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-31 04:04:46.329071 | orchestrator | 2026-03-31 04:04:46 | INFO  | date: 2026-03-31
2026-03-31 04:04:46.329168 | orchestrator | 2026-03-31 04:04:46 | INFO  | image: octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-03-31 04:04:46.329207 | orchestrator | 2026-03-31 04:04:46 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-03-31 04:04:46.329222 | orchestrator | 2026-03-31 04:04:46 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2.CHECKSUM
2026-03-31 04:04:46.501309 | orchestrator | 2026-03-31 04:04:46 | INFO  | checksum: 33630ba9835553aced9843ce59b3bc858c14b7b6435c13c6fc8d4044f883dda4
2026-03-31 04:04:46.598264 | orchestrator | 2026-03-31 04:04:46 | INFO  | It takes a moment until task ba4f8d64-b3b7-4e09-8426-e76919d332bd (image-manager) has been started and output is visible here.
2026-03-31 04:05:59.422863 | orchestrator | 2026-03-31 04:04:49 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-31'
2026-03-31 04:05:59.422983 | orchestrator | 2026-03-31 04:04:49 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2: 200
2026-03-31 04:05:59.423001 | orchestrator | 2026-03-31 04:04:49 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-31
2026-03-31 04:05:59.423028 | orchestrator | 2026-03-31 04:04:49 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-03-31 04:05:59.423040 | orchestrator | 2026-03-31 04:04:50 | INFO  | Waiting for image to leave queued state...
2026-03-31 04:05:59.423051 | orchestrator | 2026-03-31 04:04:52 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423063 | orchestrator | 2026-03-31 04:05:02 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423074 | orchestrator | 2026-03-31 04:05:12 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423085 | orchestrator | 2026-03-31 04:05:22 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423098 | orchestrator | 2026-03-31 04:05:33 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423110 | orchestrator | 2026-03-31 04:05:43 | INFO  | Waiting for import to complete...
2026-03-31 04:05:59.423121 | orchestrator | 2026-03-31 04:05:53 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-31' successfully completed, reloading images
2026-03-31 04:05:59.423133 | orchestrator | 2026-03-31 04:05:53 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-31'
2026-03-31 04:05:59.423173 | orchestrator | 2026-03-31 04:05:53 | INFO  | Setting internal_version = 2026-03-31
2026-03-31 04:05:59.423185 | orchestrator | 2026-03-31 04:05:53 | INFO  | Setting image_original_user = ubuntu
2026-03-31 04:05:59.423196 | orchestrator | 2026-03-31 04:05:53 | INFO  | Adding tag amphora
2026-03-31 04:05:59.423207 | orchestrator | 2026-03-31 04:05:54 | INFO  | Adding tag os:ubuntu
2026-03-31 04:05:59.423218 | orchestrator | 2026-03-31 04:05:54 | INFO  | Setting property architecture: x86_64
2026-03-31 04:05:59.423229 | orchestrator | 2026-03-31 04:05:54 | INFO  | Setting property hw_disk_bus: scsi
2026-03-31 04:05:59.423239 | orchestrator | 2026-03-31 04:05:54 | INFO  | Setting property hw_rng_model: virtio
2026-03-31 04:05:59.423250 | orchestrator | 2026-03-31 04:05:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-31 04:05:59.423261 | orchestrator | 2026-03-31 04:05:55 | INFO  | Setting property hw_watchdog_action: reset
2026-03-31 04:05:59.423272 | orchestrator | 2026-03-31 04:05:55 | INFO  | Setting property hypervisor_type: qemu
2026-03-31 04:05:59.423282 | orchestrator | 2026-03-31 04:05:55 | INFO  | Setting property os_distro: ubuntu
2026-03-31 04:05:59.423293 | orchestrator | 2026-03-31 04:05:56 | INFO  | Setting property replace_frequency: quarterly
2026-03-31 04:05:59.423304 | orchestrator | 2026-03-31 04:05:56 | INFO  | Setting property uuid_validity: last-1
2026-03-31 04:05:59.423314 | orchestrator | 2026-03-31 04:05:56 | INFO  | Setting property provided_until: none
2026-03-31 04:05:59.423325 | orchestrator | 2026-03-31 04:05:56 | INFO  | Setting property os_purpose: network
2026-03-31 04:05:59.423351 | orchestrator | 2026-03-31 04:05:57 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-31 04:05:59.423362 | orchestrator | 2026-03-31 04:05:57 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-31 04:05:59.423373 | orchestrator | 2026-03-31 04:05:57 | INFO  | Setting property internal_version: 2026-03-31
2026-03-31 04:05:59.423384 | orchestrator | 2026-03-31 04:05:57 | INFO  | Setting property image_original_user: ubuntu
2026-03-31 04:05:59.423397 | orchestrator | 2026-03-31 04:05:58 | INFO  | Setting property os_version: 2026-03-31
2026-03-31 04:05:59.423410 | orchestrator | 2026-03-31 04:05:58 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260331.qcow2
2026-03-31 04:05:59.423423 | orchestrator | 2026-03-31 04:05:58 | INFO  | Setting property image_build_date: 2026-03-31
2026-03-31 04:05:59.423435 | orchestrator | 2026-03-31 04:05:58 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-31'
2026-03-31 04:05:59.423447 | orchestrator | 2026-03-31 04:05:58 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-31'
2026-03-31 04:05:59.423477 | orchestrator | 2026-03-31 04:05:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-31 04:05:59.423490 | orchestrator | 2026-03-31 04:05:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-31 04:05:59.423504 | orchestrator | 2026-03-31 04:05:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-31 04:05:59.423515 | orchestrator | 2026-03-31 04:05:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-31 04:06:00.037122 | orchestrator | ok: Runtime: 0:03:11.626041
2026-03-31 04:06:00.055612 |
2026-03-31 04:06:00.055779 | TASK [Run checks]
2026-03-31 04:06:00.825276 | orchestrator | + set -e
2026-03-31 04:06:00.825444 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:06:00.825465 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:06:00.825481 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:06:00.825492 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:06:00.825501 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:06:00.825511 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-31 04:06:00.825687 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:06:00.829440 | orchestrator |
2026-03-31 04:06:00.829507 | orchestrator | # CHECK
2026-03-31 04:06:00.829519 | orchestrator |
2026-03-31 04:06:00.829529 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 04:06:00.829541 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 04:06:00.829549 | orchestrator | + echo
2026-03-31 04:06:00.829557 | orchestrator | + echo '# CHECK'
2026-03-31 04:06:00.829566 | orchestrator | + echo
2026-03-31 04:06:00.829578 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-31 04:06:00.830463 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-31 04:06:00.905217 | orchestrator |
2026-03-31 04:06:00.905301 | orchestrator | ## Containers @ testbed-manager
2026-03-31 04:06:00.905312 | orchestrator |
2026-03-31 04:06:00.905330 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-31 04:06:00.905337 | orchestrator | + echo
2026-03-31 04:06:00.905344 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-31 04:06:00.905353 | orchestrator | + echo
2026-03-31 04:06:00.905361 | orchestrator | + osism container testbed-manager ps
2026-03-31 04:06:03.340423 | orchestrator | 2026-03-31 04:06:03 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-31 04:06:03.750861 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-31 04:06:03.750977 | orchestrator | 412e98401f89 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-03-31 04:06:03.750998 | orchestrator | 5b94205373e5 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-03-31 04:06:03.751009 | orchestrator | 2f0591bfb19f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-31 04:06:03.751018 | orchestrator | 0a76a62b4e06 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-31 04:06:03.751028 | orchestrator | f96fa6d1f761 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-03-31 04:06:03.751041 | orchestrator | 6a06ac531040 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 59 minutes ago Up 59 minutes cephclient
2026-03-31 04:06:03.751051 | orchestrator | 810e072c2cdc registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-31 04:06:03.751060 | orchestrator | 7b8cad406fe1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-31 04:06:03.751094 | orchestrator | 88eefd52ba11 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-31 04:06:03.751104 | orchestrator | d41bc1e94533 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-03-31 04:06:03.751113 | orchestrator | 179044d5392a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-03-31 04:06:03.751122 | orchestrator | 4155ac5a350d registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-03-31 04:06:03.751131 | orchestrator | e42870d06f47 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-03-31 04:06:03.751140 | orchestrator | baba04571fd2 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-31 04:06:03.751168 | orchestrator | 741b93ca9977 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-03-31 04:06:03.751185 | orchestrator | 4f82c4066ff1 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-03-31 04:06:03.751198 | orchestrator | ce445d473d94 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-03-31 04:06:03.751213 | orchestrator | 924c4e7b5039 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-03-31 04:06:03.751227 | orchestrator | cad9a21be9d7 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-03-31 04:06:03.751243 | orchestrator | 557eed7c4de9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-03-31 04:06:03.751256 | orchestrator | 874810276b1c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-31 04:06:03.751271 | orchestrator | ff9026a85261 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-03-31 04:06:03.751295 | orchestrator | fb86fad20452 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-03-31 04:06:03.751311 | orchestrator | b1e9e274de8b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-03-31 04:06:03.751327 | orchestrator | 62140ab71409 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-03-31 04:06:03.751342 | orchestrator | 2bc66b243af6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-03-31 04:06:03.751358 | orchestrator | 2cb50a0d34cd registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-03-31 04:06:03.751368 | orchestrator | 790b88ee18db registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-03-31 04:06:03.751377 | orchestrator | 2a2f05a3d3f0 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-31 04:06:03.751391 | orchestrator | 4a15997d3155 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-31 04:06:04.108843 | orchestrator |
2026-03-31 04:06:04.108949 | orchestrator | ## Images @ testbed-manager
2026-03-31 04:06:04.108965 | orchestrator |
2026-03-31 04:06:04.108977 | orchestrator | + echo
2026-03-31 04:06:04.108989 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-31 04:06:04.109001 | orchestrator | + echo
2026-03-31 04:06:04.109017 | orchestrator | + osism container testbed-manager images
2026-03-31 04:06:06.606304 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-31 04:06:06.606422 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 8b69a1d0123c 24 hours ago 239MB
2026-03-31 04:06:06.606438 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-03-31 04:06:06.606465 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-31 04:06:06.606476 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-03-31 04:06:06.606488 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-03-31 04:06:06.606499 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-03-31 04:06:06.606510 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-03-31 04:06:06.606524 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-03-31 04:06:06.606535 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-03-31 04:06:06.606571 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-03-31 04:06:06.606583 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-03-31 04:06:06.606614 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-03-31 04:06:06.606625 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-03-31 04:06:06.606636 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-03-31 04:06:06.606647 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-03-31 04:06:06.606658 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-03-31 04:06:06.606668 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-03-31 04:06:06.606679 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-03-31 04:06:06.606691 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-31 04:06:06.606701 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-31 04:06:06.606712 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-03-31 04:06:06.606723 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-31 04:06:06.606733 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-31 04:06:06.606744 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-31 04:06:06.606755 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-03-31 04:06:06.964437 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-31 04:06:06.964553 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-31 04:06:07.031177 | orchestrator |
2026-03-31 04:06:07.031258 | orchestrator | ## Containers @ testbed-node-0
2026-03-31 04:06:07.031298 | orchestrator |
2026-03-31 04:06:07.031304 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-31 04:06:07.031308 | orchestrator | + echo
2026-03-31 04:06:07.031314 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-31 04:06:07.031319 | orchestrator | + echo
2026-03-31 04:06:07.031324 | orchestrator | + osism container testbed-node-0 ps
2026-03-31 04:06:09.619647 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-31 04:06:09.619731 | orchestrator | e2eb935e248e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-31 04:06:09.619756 | orchestrator | 3a7c4980486b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-31 04:06:09.619764 | orchestrator | 930ed080470d registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-03-31 04:06:09.619770 | orchestrator | 37bb8d17b8f4 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-31 04:06:09.619795 | orchestrator | 387f3746b6ed registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-31 04:06:09.619801 | orchestrator | b04b19aa52c3 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-31 04:06:09.619812 | orchestrator | c988612b30aa registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-31 04:06:09.619819 | orchestrator | c5c4fb665f8e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-31 04:06:09.619825 | orchestrator | d8996539908d registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-31 04:06:09.619832 | orchestrator | 250dbe483027 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-31 04:06:09.619838 | orchestrator | 0c2916f4b5f2 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-31 04:06:09.619845 | orchestrator | 807121a320e9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-31 04:06:09.619851 | orchestrator | 071370add771 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-31 04:06:09.619857 | orchestrator | 1ff8b211c7cf registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-31 04:06:09.619863 | orchestrator | fec3dae68d8a registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-31 04:06:09.619869 | orchestrator | 676c5ee37bcf registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-31 04:06:09.619876 | orchestrator | 99de84b0bdf5 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-31 04:06:09.619882 | orchestrator | 75c48bf0f536 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-31 04:06:09.619888 | orchestrator | b2b7b56991bb registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-31 04:06:09.619912 | orchestrator | e07a2e88019f registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-31 04:06:09.619919 | orchestrator | 23614b680302 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-31 04:06:09.619925 | orchestrator | fa15cc9cedfc registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-31 04:06:09.619937 | orchestrator | b80d7c0d6ea5 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-03-31 04:06:09.619943 | orchestrator | 40ca7b252979 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) designate_worker
2026-03-31 04:06:09.619949 | orchestrator | 2b43eae9929c registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-31 04:06:09.619960 | orchestrator | febf4ebb941f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-31 04:06:09.619966 | orchestrator | ec28da70768f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-31 04:06:09.619972 | orchestrator | f341193122a9 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-03-31 04:06:09.619979 | orchestrator | 4652b55cc9e1 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-03-31 04:06:09.619985 | orchestrator | 81a5c8fae69b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-03-31 04:06:09.619991 | orchestrator | 8f622492f5b6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-03-31 04:06:09.619998 | orchestrator | dd121334054a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-03-31 04:06:09.620004 | orchestrator | 2b83625d2709 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-31 04:06:09.620010 | orchestrator | 659b34f14811 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-03-31 04:06:09.620016 | orchestrator | 6c87b3722aca registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-03-31 04:06:09.620023 | orchestrator | 674f41f4f161 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-31 04:06:09.620029 | orchestrator | 613017e23af0 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-03-31 04:06:09.620035 | orchestrator | 182f1da06674 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-31 04:06:09.620041 | orchestrator | 130ed3ff8290 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-03-31 04:06:09.620053 | orchestrator | 5db14e075a6e registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-03-31 04:06:09.620064 | orchestrator | 90b0f4c186bd registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-03-31 04:06:09.620071 | orchestrator | 45b670ba0eb8 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-03-31 04:06:09.620080 | orchestrator | cf0e67f976d1 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-03-31 04:06:09.620087 | orchestrator | e2bd1d7bbcf1 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-03-31 04:06:09.620093 | orchestrator | 4dcc625e7bb0 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-03-31 04:06:09.620099 | orchestrator | c8d470625a34 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-03-31 04:06:09.620105 | orchestrator | 590920b9072c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-03-31 04:06:09.620111 | orchestrator | 647b02ca3ece registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-03-31 04:06:09.620118 | orchestrator | f373a6594a4b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh
2026-03-31 04:06:09.620124 | orchestrator | ed3e3b7a9681 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-0
2026-03-31 04:06:09.620130 | orchestrator | a0cbaf0f5256 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-31 04:06:09.620136 | orchestrator | 80cb11f76dbe registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-31 04:06:09.620142 | orchestrator | a37e78a42f8d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-31 04:06:09.620148 | orchestrator | 1f06538471e2 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-31 04:06:09.620155 | orchestrator | 5028ef138934 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-31 04:06:09.620161 | orchestrator | 2c61e9e06927 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-31 04:06:09.620170 | orchestrator | c279bd6e0f80 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-31 04:06:09.620177 | orchestrator | 2c66f8f68701 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-31 04:06:09.620187 | orchestrator | 79fd1d167548 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-31 04:06:09.620197 | orchestrator | 7ccf5cb42b7c registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-31 04:06:09.620203 | orchestrator | fd351933670a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-31 04:06:09.620210 | orchestrator | 88597d97fcb0 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-31 04:06:09.620216 | orchestrator | db003431a2ad registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-31 04:06:09.620222 | orchestrator | 9ac0cf94ca5b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-03-31 04:06:09.620228 | orchestrator | 8c982e26bb25 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-31 04:06:09.620234 | orchestrator | b285e0b2311f registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-31 04:06:09.620240 | orchestrator | 35ab47264f1b registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-31 04:06:09.620246 | orchestrator | b5806f823e1b registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-31 04:06:09.620253 | orchestrator | ada4b0557eab registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-31 04:06:09.620259 | orchestrator | 6d091bd308b4 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-31 04:06:09.620265 | orchestrator | a9fba281daae registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-31 04:06:10.028060 | orchestrator |
2026-03-31 04:06:10.028186 | orchestrator | ## Images @ testbed-node-0
2026-03-31 04:06:10.028206 | orchestrator |
2026-03-31 04:06:10.028216 | orchestrator | + echo
2026-03-31 04:06:10.028227 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-31 04:06:10.028238 | orchestrator | + echo
2026-03-31 04:06:10.028247 | orchestrator | + osism container testbed-node-0 images
2026-03-31 04:06:12.678979 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-31 04:06:12.679102 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-03-31 04:06:12.679116 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-03-31 04:06:12.679126 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-03-31 04:06:12.679134 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-03-31 04:06:12.679189 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-03-31 04:06:12.679198 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-03-31 04:06:12.679206 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-03-31 04:06:12.679214 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-03-31 04:06:12.679222 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-03-31 04:06:12.679229 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-03-31 04:06:12.679237 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-03-31 04:06:12.679245 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-03-31 04:06:12.679253 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-03-31 04:06:12.679261 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-03-31 04:06:12.679268 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-03-31 04:06:12.679276 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-03-31 04:06:12.679284 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-03-31 04:06:12.679292 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-03-31 04:06:12.679300 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-03-31 04:06:12.679308 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-03-31 04:06:12.679315 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-03-31 04:06:12.679323 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-03-31 04:06:12.679331 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-03-31 04:06:12.679339 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-03-31 04:06:12.679353 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-03-31 04:06:12.679361 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-03-31 04:06:12.679369 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-03-31 04:06:12.679382 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-03-31 04:06:12.679390 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-03-31 04:06:12.679398 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-03-31 04:06:12.679412 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-03-31 04:06:12.679435 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-03-31 04:06:12.679444 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-03-31 04:06:12.679452 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-03-31 04:06:12.679459 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-03-31 04:06:12.679467 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-03-31 04:06:12.679475 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-03-31 04:06:12.679483 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-03-31 04:06:12.679491 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-03-31 04:06:12.679499 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-03-31 04:06:12.679507 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-03-31 04:06:12.679515 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-03-31 04:06:12.679524 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-03-31 04:06:12.679533 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-03-31 04:06:12.679542 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-03-31 04:06:12.679551 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-03-31 04:06:12.679562 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-03-31 04:06:12.679571 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-03-31 04:06:12.679614 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-03-31 04:06:12.679629 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-03-31 04:06:12.679644 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-03-31 04:06:12.679658 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-03-31 04:06:12.679672 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-03-31 04:06:12.679683 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-03-31 04:06:12.679692 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-03-31 04:06:12.679701 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-03-31 04:06:12.679716 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-03-31 04:06:12.679725 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-03-31 04:06:12.679739 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-03-31 04:06:12.679748 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-03-31 04:06:12.679757 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-03-31 04:06:12.679766 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-03-31 04:06:12.679775 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-03-31 04:06:12.679790 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-03-31 04:06:12.679800 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-03-31 04:06:12.679809 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-03-31 04:06:12.679818 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-03-31 04:06:12.679826 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-03-31 04:06:12.679836 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-31 04:06:13.120325 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-31 04:06:13.120741 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-31 04:06:13.166620 | orchestrator |
2026-03-31 04:06:13.166696 | orchestrator | ## Containers @ testbed-node-1
2026-03-31 04:06:13.166710 | orchestrator |
2026-03-31 04:06:13.166717 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-31 04:06:13.166724 | orchestrator | + echo
2026-03-31 04:06:13.166731 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-31 04:06:13.166738 | orchestrator | + echo
2026-03-31 04:06:13.166746 | orchestrator | + osism container testbed-node-1 ps
2026-03-31 04:06:15.643181 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-31 04:06:15.643292 | orchestrator | edd25fae1a7a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-31 04:06:15.643310 | orchestrator | f42989d65b1b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-31 04:06:15.643322 | orchestrator | 9cc243799a61 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-31 04:06:15.643334 | orchestrator | 6c8254693b20 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-31 04:06:15.643348 | orchestrator | d1026ca50f9e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-31 04:06:15.643360 | orchestrator | 27888bcddaa0 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-31 04:06:15.643394 | orchestrator | 22b531c3ce9b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-31 04:06:15.643405 | orchestrator | ca8e2f65b5e7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-31 04:06:15.643417 | orchestrator | fadbe057407f registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-31 04:06:15.643428 | orchestrator | 0e3d3baece0b registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-31 04:06:15.643439 | orchestrator | 141731c330a6 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-31 04:06:15.643450 | orchestrator | 8525171a89ed registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-31 04:06:15.643479 | orchestrator | 5b57de026e0f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-31 04:06:15.643490 | orchestrator | 605bda12563a registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-31 04:06:15.643501 | orchestrator | c0bb57a6da4e registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-31 04:06:15.643512 | orchestrator | aa7a763b2e63 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-31 04:06:15.643523 | orchestrator | 7b8c669eec37 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-31 04:06:15.643534 | orchestrator | b16e373d08e8 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-31 04:06:15.643545 | orchestrator | 39819a7fcc2f registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-31 04:06:15.643574 | orchestrator | d9d6a86b406e registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-31 04:06:15.643615 | orchestrator | b1e059a38faf registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-31 04:06:15.643626 | orchestrator | cc8f2ccb967a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-31 04:06:15.643638 | orchestrator | a22852d9f9fc registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-03-31 04:06:15.643657 | orchestrator | 83ce44c18138 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-31 04:06:15.643668 | orchestrator | 52e70e506a84 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-31 04:06:15.643679 | orchestrator | 9ca0977d32f1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-31 04:06:15.643690 | orchestrator | ed46252736e7 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-31 04:06:15.643701 | orchestrator | 95d766ea7a89 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-03-31 04:06:15.643771 | orchestrator | 09b8bcf13971 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-03-31 04:06:15.643784 | orchestrator | 70a00f5d7dda registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-03-31 04:06:15.643797 | orchestrator | 80444b0ad80f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-03-31 04:06:15.643812 | orchestrator | e41e504a43e1 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-03-31 04:06:15.643825 | orchestrator | 2a9073e478a1 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-31 04:06:15.643838 | orchestrator | 28557411b708 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-03-31 04:06:15.644005 | orchestrator | f9cd5dd11011 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-03-31 04:06:15.644034 | orchestrator | de42a4bdf461 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-31 04:06:15.644050 | orchestrator | 545b0068a456 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-03-31 04:06:15.644072 | orchestrator | 57a493157762 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-31 04:06:15.644085 | orchestrator | 3f98b7884d5c registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-03-31 04:06:15.644096 | orchestrator | 27404d0848f3 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-03-31 04:06:15.644107 | orchestrator | f4d6693836e2 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-03-31 04:06:15.644127 | orchestrator | c8626e760dd5 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-03-31 04:06:15.644138 | orchestrator | 4c9dfaa91656 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-03-31 04:06:15.644149 | orchestrator | 27da6b93c931 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-03-31 04:06:15.644160 | orchestrator | 86155ecb2067 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-03-31 04:06:15.644171 | orchestrator | e75f9216139a registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-03-31 04:06:15.644182 | orchestrator | 7f4bb9c634a0 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-03-31 04:06:15.644193 | orchestrator | ce490b378421 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-03-31 04:06:15.644204 | orchestrator | e15661bcef8e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-03-31 04:06:15.644215 | orchestrator | ae01573fae06 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1
2026-03-31 04:06:15.644226 | orchestrator | 941c3e176734 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-03-31 04:06:15.644237 | orchestrator | 1ea1d727f3e0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-03-31 04:06:15.644248 | orchestrator | 005a438656c6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-31 04:06:15.644259 | orchestrator | 8f933ada44a9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-31 04:06:15.644278 | orchestrator | 85fb425855c0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-31 04:06:15.644289 | orchestrator | 1bfbc7ff422f registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-31 04:06:15.644300 | orchestrator | c59677ef1226 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-31 04:06:15.644311 | orchestrator | bd465713a1bb registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-31 04:06:15.644322 | orchestrator | 19470b52be0d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-31 04:06:15.644339 | orchestrator | ea0745212f65 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-31 04:06:15.644350 | orchestrator | 3358c16a5bab registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-31 04:06:15.644361 | orchestrator | a2202dd35ca8 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-31 04:06:15.644371 | orchestrator | 4b1f409a53d0 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-31 04:06:15.644382 | orchestrator | 1ade1d78f5a5 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-31 04:06:15.644393 | orchestrator | 09249b104873 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-31 04:06:15.644418 | orchestrator | ffc7a2767741 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-31 04:06:15.644429 | orchestrator | 0494702e233e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-31 04:06:15.644440 | orchestrator | 8eafd5d9ca97 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-31 04:06:15.644451 | orchestrator | 3f96728c98e1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-31 04:06:15.644467 | orchestrator | 2562b9234520 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-31 04:06:15.644478 | orchestrator | 0a97b2b45dfa registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-31 04:06:16.001122 | orchestrator |
2026-03-31 04:06:16.001227 | orchestrator | ## Images @ testbed-node-1
2026-03-31 04:06:16.001251 | orchestrator |
2026-03-31 04:06:16.001268 | orchestrator | + echo
2026-03-31 04:06:16.001283 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-31 04:06:16.001300 | orchestrator | + echo
2026-03-31 04:06:16.001317 | orchestrator | + osism container testbed-node-1 images
2026-03-31 04:06:18.444877 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-31 04:06:18.444959 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-03-31 04:06:18.444969 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-03-31 04:06:18.444976 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-03-31 04:06:18.444983 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-03-31 04:06:18.444990 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-03-31 04:06:18.445015 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-03-31 04:06:18.445022 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-03-31 04:06:18.445028 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-03-31 04:06:18.445035 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-03-31 04:06:18.445041 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-03-31 04:06:18.445047 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-03-31 04:06:18.445053 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-03-31 04:06:18.445059 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-03-31 04:06:18.445065 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-03-31 04:06:18.445071 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-03-31 04:06:18.445078 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-03-31 04:06:18.445084 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-03-31 04:06:18.445090 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-03-31 04:06:18.445096 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-03-31 04:06:18.445102 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-03-31 04:06:18.445108 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-03-31 04:06:18.445114 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-03-31 04:06:18.445120 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-03-31 04:06:18.445127 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-03-31 04:06:18.445133 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-03-31 04:06:18.445139 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-03-31 04:06:18.445146 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-03-31 04:06:18.445152 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-03-31 04:06:18.445158 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-03-31 04:06:18.445183 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-03-31 04:06:18.445203 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-03-31 04:06:18.445229 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-03-31 04:06:18.445247 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-03-31 04:06:18.445256 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-03-31 04:06:18.445266 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-03-31 04:06:18.445275 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-03-31 04:06:18.445285 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-03-31 04:06:18.445295 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-03-31 04:06:18.445304 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-03-31 04:06:18.445330 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-03-31 04:06:18.445340 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-03-31 04:06:18.445350 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-03-31 04:06:18.445359 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-03-31 04:06:18.445368 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-03-31 04:06:18.445378 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-03-31 04:06:18.445388 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-03-31 04:06:18.445398 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-03-31 04:06:18.445408 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-03-31 04:06:18.445419 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-03-31 04:06:18.445429 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-03-31 04:06:18.445438 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-03-31 04:06:18.445447 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-03-31 04:06:18.445457 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-03-31 04:06:18.445466 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-03-31 04:06:18.445475 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-03-31 04:06:18.445484 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-03-31 04:06:18.445494 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-03-31 04:06:18.445504 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-03-31 04:06:18.445521 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-03-31 04:06:18.445537 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-03-31 04:06:18.445547 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-03-31 04:06:18.445557 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-03-31 04:06:18.445567 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-03-31 04:06:18.445637 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-03-31 04:06:18.445649 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-03-31 04:06:18.445660 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-03-31 04:06:18.445670 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-03-31 04:06:18.445680 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-03-31 04:06:18.445691 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-31 04:06:18.819153 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-31 04:06:18.819744 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-31 04:06:18.894935 | orchestrator |
2026-03-31 04:06:18.895017 | orchestrator | ## Containers @ testbed-node-2
2026-03-31
04:06:18.895027 | orchestrator | 2026-03-31 04:06:18.895036 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-31 04:06:18.895045 | orchestrator | + echo 2026-03-31 04:06:18.895053 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-31 04:06:18.895061 | orchestrator | + echo 2026-03-31 04:06:18.895069 | orchestrator | + osism container testbed-node-2 ps 2026-03-31 04:06:21.751902 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-31 04:06:21.751978 | orchestrator | b390fa7f27b7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-31 04:06:21.751986 | orchestrator | b98a3f6c8871 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-31 04:06:21.751991 | orchestrator | 4bfb0cba3458 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-31 04:06:21.751995 | orchestrator | 3e6274b0c937 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-31 04:06:21.752001 | orchestrator | 4e42115d4f63 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-03-31 04:06:21.752005 | orchestrator | 55df4233ded5 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-31 04:06:21.752009 | orchestrator | fb12f5d43a5b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-31 04:06:21.752029 | orchestrator | 0b0023c7c7c6 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-31 04:06:21.752034 | orchestrator | e1bfca226dc4 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-03-31 04:06:21.752038 | orchestrator | 7227fb9237b2 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-03-31 04:06:21.752042 | orchestrator | eb782696c0b5 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-31 04:06:21.752045 | orchestrator | a76507f1fabb registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-31 04:06:21.752049 | orchestrator | c6d427bc7b21 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-31 04:06:21.752053 | orchestrator | 0a09eba605fe registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-31 04:06:21.752068 | orchestrator | 5f872947075a registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-31 04:06:21.752072 | orchestrator | 07ad19e2f1d3 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-31 04:06:21.752076 | orchestrator | 0d5c74227463 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-31 04:06:21.752079 | orchestrator | f23dc3d17deb 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-31 04:06:21.752083 | orchestrator | 6bda12c54848 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-31 04:06:21.752097 | orchestrator | 99a0cc2b1be5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-31 04:06:21.752101 | orchestrator | e98eeeebdca3 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-31 04:06:21.752105 | orchestrator | 48f854d62da2 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-31 04:06:21.752109 | orchestrator | b887950010d0 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-03-31 04:06:21.752112 | orchestrator | fc0fa5258fe1 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-31 04:06:21.752116 | orchestrator | 515545ce4069 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-31 04:06:21.752124 | orchestrator | 1c08d3ac9748 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-31 04:06:21.752128 | orchestrator | e4038b517f08 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 
2026-03-31 04:06:21.752132 | orchestrator | 26f019e1b851 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-03-31 04:06:21.752136 | orchestrator | 9be4d5442268 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-03-31 04:06:21.752139 | orchestrator | ac368439d3eb registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-31 04:06:21.752143 | orchestrator | d6ed5f5e401e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-03-31 04:06:21.752147 | orchestrator | 77549c69d78d registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-03-31 04:06:21.752153 | orchestrator | 2c824df1483b registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-31 04:06:21.752157 | orchestrator | 77af9aabfebe registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-03-31 04:06:21.752161 | orchestrator | 91ab46ff3a78 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-31 04:06:21.752165 | orchestrator | 97ca27248436 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-31 04:06:21.752168 | orchestrator | 22b2e631ec30 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes 
(healthy) glance_api 2026-03-31 04:06:21.752172 | orchestrator | 9781b3f3418c registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-31 04:06:21.752176 | orchestrator | 61715dfc3d68 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-03-31 04:06:21.752184 | orchestrator | bf21538e54b7 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-03-31 04:06:21.752188 | orchestrator | ecfc547b2af0 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-03-31 04:06:21.752191 | orchestrator | 03b1a38b5937 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-31 04:06:21.752198 | orchestrator | e35ef811bd88 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-03-31 04:06:21.752202 | orchestrator | 7589522615c3 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-03-31 04:06:21.752206 | orchestrator | 99a91c1200bc registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-03-31 04:06:21.752209 | orchestrator | 28fa69c05ec0 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-03-31 04:06:21.752213 | orchestrator | 0fc6cccc009c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-03-31 
04:06:21.752217 | orchestrator | 683f048a53a9 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-03-31 04:06:21.752221 | orchestrator | b32c595a9f97 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-03-31 04:06:21.752224 | orchestrator | 0cfcdd0a0078 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2 2026-03-31 04:06:21.752228 | orchestrator | 205cfc875b91 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-03-31 04:06:21.752232 | orchestrator | df3f30930c20 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-03-31 04:06:21.752236 | orchestrator | d713d22392bc registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-31 04:06:21.752239 | orchestrator | 900971101dc5 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-31 04:06:21.752245 | orchestrator | 247049e54c76 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-31 04:06:21.752249 | orchestrator | fd4c813f681d registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-31 04:06:21.752253 | orchestrator | 334779d37247 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-31 04:06:21.752257 | orchestrator | 3d70c707e690 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-31 04:06:21.752261 | orchestrator | 02d2a4f6a1e3 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-31 04:06:21.752267 | orchestrator | c01182f50b83 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-31 04:06:21.752274 | orchestrator | 1e909ddb41fa registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-31 04:06:21.752278 | orchestrator | 6607cd6faccb registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-31 04:06:21.752281 | orchestrator | a9db1de880a4 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-31 04:06:21.752285 | orchestrator | 3d9663b74250 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-31 04:06:21.752289 | orchestrator | fec98ea0202e registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-31 04:06:21.752293 | orchestrator | 2358614a1a09 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-31 04:06:21.752297 | orchestrator | 98a251e62993 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-31 04:06:21.752300 | orchestrator | e29a1a11293e 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-31 04:06:21.752304 | orchestrator | 811af31f2e4c registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-31 04:06:21.752308 | orchestrator | 75b0be477b14 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-31 04:06:21.752312 | orchestrator | 008dbdb3f0ae registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-31 04:06:22.157373 | orchestrator |
2026-03-31 04:06:22.157462 | orchestrator | ## Images @ testbed-node-2
2026-03-31 04:06:22.157477 | orchestrator |
2026-03-31 04:06:22.157487 | orchestrator | + echo
2026-03-31 04:06:22.157498 | orchestrator | + echo '## Images @ testbed-node-2'
2026-03-31 04:06:22.157508 | orchestrator | + echo
2026-03-31 04:06:22.157517 | orchestrator | + osism container testbed-node-2 images
2026-03-31 04:06:24.652352 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-31 04:06:24.652457 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-03-31 04:06:24.652490 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-03-31 04:06:24.652503 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-03-31 04:06:24.652513 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-03-31 04:06:24.652523 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-03-31 04:06:24.652533 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-03-31 04:06:24.652542 |
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-03-31 04:06:24.652626 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-03-31 04:06:24.652639 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-03-31 04:06:24.652650 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-03-31 04:06:24.652664 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-03-31 04:06:24.652675 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-03-31 04:06:24.652687 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-03-31 04:06:24.652699 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-03-31 04:06:24.652710 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-03-31 04:06:24.652720 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-03-31 04:06:24.652731 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-03-31 04:06:24.652742 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-03-31 04:06:24.652752 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-03-31 04:06:24.652762 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-03-31 04:06:24.652773 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-03-31 04:06:24.652785 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-03-31 04:06:24.652795 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-03-31 04:06:24.652806 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-03-31 04:06:24.652816 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-03-31 04:06:24.652827 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-03-31 04:06:24.652838 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-03-31 04:06:24.652848 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-03-31 04:06:24.652857 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-03-31 04:06:24.652868 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-03-31 04:06:24.652879 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-03-31 04:06:24.652912 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-03-31 04:06:24.652923 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-03-31 04:06:24.652932 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-03-31 04:06:24.652953 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-03-31 04:06:24.652963 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-03-31 04:06:24.652973 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-03-31 04:06:24.652983 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-03-31 04:06:24.652992 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-03-31 04:06:24.653002 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-03-31 04:06:24.653011 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-03-31 04:06:24.653021 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-03-31 04:06:24.653031 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-03-31 04:06:24.653053 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-03-31 04:06:24.653068 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-03-31 04:06:24.653080 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-03-31 04:06:24.653091 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-03-31 04:06:24.653103 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-03-31 04:06:24.653113 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-03-31 04:06:24.653125 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-03-31 04:06:24.653135 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-03-31 04:06:24.653147 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-03-31 04:06:24.653157 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-03-31 04:06:24.653167 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-03-31 04:06:24.653177 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-03-31 04:06:24.653186 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-03-31 04:06:24.653196 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-03-31 04:06:24.653206 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-03-31 04:06:24.653215 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-03-31 04:06:24.653225 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-03-31 04:06:24.653241 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-03-31 04:06:24.653250 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-03-31 04:06:24.653260 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-03-31 04:06:24.653277 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-03-31 04:06:24.653286 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-03-31 04:06:24.653296 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-03-31 04:06:24.653311 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-03-31 04:06:24.653321 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-03-31 04:06:24.653330 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-31 04:06:25.068268 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-31 04:06:25.079187 | orchestrator | + set -e 2026-03-31 04:06:25.080045 | orchestrator | + source /opt/manager-vars.sh 2026-03-31 04:06:25.080077 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-31 04:06:25.080084 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-31 04:06:25.080092 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-31 04:06:25.080098 | orchestrator | ++ CEPH_VERSION=reef 2026-03-31 04:06:25.080105 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-31 04:06:25.080113 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-31 04:06:25.080120 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-31 04:06:25.080127 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-31 04:06:25.080134 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-31 04:06:25.080140 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-31 04:06:25.080147 | orchestrator | ++ export ARA=false 2026-03-31 04:06:25.080153 | orchestrator | ++ ARA=false 
2026-03-31 04:06:25.080160 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 04:06:25.080167 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 04:06:25.080174 | orchestrator | ++ export TEMPEST=false
2026-03-31 04:06:25.080180 | orchestrator | ++ TEMPEST=false
2026-03-31 04:06:25.080187 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 04:06:25.080193 | orchestrator | ++ IS_ZUUL=true
2026-03-31 04:06:25.080200 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:06:25.080207 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:06:25.080214 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 04:06:25.080220 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 04:06:25.080227 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 04:06:25.080233 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 04:06:25.080241 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 04:06:25.080247 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 04:06:25.080254 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 04:06:25.080260 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 04:06:25.080267 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-31 04:06:25.080274 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-31 04:06:25.087878 | orchestrator | + set -e
2026-03-31 04:06:25.087962 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:06:25.087978 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:06:25.087991 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:06:25.088003 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:06:25.088013 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:06:25.088025 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-31 04:06:25.088828 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:06:25.092971 | orchestrator |
2026-03-31 04:06:25.093029 | orchestrator | # Ceph status
2026-03-31 04:06:25.093041 | orchestrator |
2026-03-31 04:06:25.093080 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 04:06:25.093093 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 04:06:25.093103 | orchestrator | + echo
2026-03-31 04:06:25.093113 | orchestrator | + echo '# Ceph status'
2026-03-31 04:06:25.093123 | orchestrator | + echo
2026-03-31 04:06:25.093133 | orchestrator | + ceph -s
2026-03-31 04:06:25.740512 | orchestrator | cluster:
2026-03-31 04:06:25.740670 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-31 04:06:25.740685 | orchestrator | health: HEALTH_OK
2026-03-31 04:06:25.740694 | orchestrator |
2026-03-31 04:06:25.740703 | orchestrator | services:
2026-03-31 04:06:25.740711 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m)
2026-03-31 04:06:25.740731 | orchestrator | mgr: testbed-node-1(active, since 57m), standbys: testbed-node-2, testbed-node-0
2026-03-31 04:06:25.740740 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-31 04:06:25.740749 | orchestrator | osd: 6 osds: 6 up (since 67m), 6 in (since 67m)
2026-03-31 04:06:25.740757 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-31 04:06:25.740765 | orchestrator |
2026-03-31 04:06:25.740773 | orchestrator | data:
2026-03-31 04:06:25.740781 | orchestrator | volumes: 1/1 healthy
2026-03-31 04:06:25.740789 | orchestrator | pools: 14 pools, 401 pgs
2026-03-31 04:06:25.740797 | orchestrator | objects: 555 objects, 2.2 GiB
2026-03-31 04:06:25.740806 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-03-31 04:06:25.740814 | orchestrator | pgs: 401 active+clean
2026-03-31 04:06:25.740822 | orchestrator |
2026-03-31 04:06:25.789053 | orchestrator | + echo
2026-03-31 04:06:25.789914 | orchestrator |
2026-03-31 04:06:25.789988 | orchestrator | # Ceph versions
2026-03-31 04:06:25.790003 | orchestrator |
2026-03-31 04:06:25.790063 | orchestrator | + echo '# Ceph versions'
2026-03-31 04:06:25.790076 | orchestrator | + echo
2026-03-31 04:06:25.790086 | orchestrator | + ceph versions
2026-03-31 04:06:26.501272 | orchestrator | {
2026-03-31 04:06:26.501375 | orchestrator | "mon": {
2026-03-31 04:06:26.501390 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-31 04:06:26.501401 | orchestrator | },
2026-03-31 04:06:26.501412 | orchestrator | "mgr": {
2026-03-31 04:06:26.501422 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-31 04:06:26.501431 | orchestrator | },
2026-03-31 04:06:26.501441 | orchestrator | "osd": {
2026-03-31 04:06:26.501451 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-03-31 04:06:26.501461 | orchestrator | },
2026-03-31 04:06:26.501470 | orchestrator | "mds": {
2026-03-31 04:06:26.501480 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-31 04:06:26.501494 | orchestrator | },
2026-03-31 04:06:26.501514 | orchestrator | "rgw": {
2026-03-31 04:06:26.501537 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-31 04:06:26.501552 | orchestrator | },
2026-03-31 04:06:26.501654 | orchestrator | "overall": {
2026-03-31 04:06:26.501675 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-03-31 04:06:26.501693 | orchestrator | }
2026-03-31 04:06:26.501711 | orchestrator | }
2026-03-31 04:06:26.561045 | orchestrator |
2026-03-31 04:06:26.561145 | orchestrator | # Ceph OSD tree
2026-03-31 04:06:26.561158 | orchestrator |
2026-03-31 04:06:26.561168 | orchestrator | + echo
2026-03-31 04:06:26.561177 | orchestrator | + echo '# Ceph OSD tree'
2026-03-31 04:06:26.561187 | orchestrator | + echo
2026-03-31 04:06:26.561197 | orchestrator | + ceph osd df tree
2026-03-31 04:06:27.144843 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-31 04:06:27.144998 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 390 MiB 113 GiB 5.88 1.00 - root default
2026-03-31 04:06:27.145025 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-3
2026-03-31 04:06:27.145045 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1004 MiB 931 MiB 1 KiB 74 MiB 19 GiB 4.91 0.83 196 up osd.2
2026-03-31 04:06:27.145934 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.90 1.17 194 up osd.3
2026-03-31 04:06:27.145990 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-03-31 04:06:27.146111 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.94 1.01 192 up osd.1
2026-03-31 04:06:27.146132 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.79 0.98 196 up osd.4
2026-03-31 04:06:27.146149 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5
2026-03-31 04:06:27.146165 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.26 1.06 199 up osd.0
2026-03-31 04:06:27.146182 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 66 MiB 19 GiB 5.50 0.93 193 up osd.5
2026-03-31 04:06:27.146192 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 390 MiB 113 GiB 5.88
2026-03-31 04:06:27.146203 | orchestrator | MIN/MAX VAR: 0.83/1.17 STDDEV: 0.62
2026-03-31 04:06:27.201791 | orchestrator |
2026-03-31 04:06:27.201918 | orchestrator | # Ceph monitor status
2026-03-31 04:06:27.201945 | orchestrator |
2026-03-31 04:06:27.201964 | orchestrator | + echo
2026-03-31 04:06:27.201983 | orchestrator | + echo '# Ceph monitor status'
2026-03-31 04:06:27.202001 | orchestrator | + echo
2026-03-31 04:06:27.202088 | orchestrator | + ceph mon stat
2026-03-31 04:06:27.827829 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-03-31 04:06:27.888269 | orchestrator |
2026-03-31 04:06:27.888373 | orchestrator | # Ceph quorum status
2026-03-31 04:06:27.888388 | orchestrator |
2026-03-31 04:06:27.888400 | orchestrator | + echo
2026-03-31 04:06:27.888412 | orchestrator | + echo '# Ceph quorum status'
2026-03-31 04:06:27.888423 | orchestrator | + echo
2026-03-31 04:06:27.889326 | orchestrator | + ceph quorum_status
2026-03-31 04:06:27.889393 | orchestrator | + jq
2026-03-31 04:06:28.609030 | orchestrator | {
2026-03-31 04:06:28.609297 | orchestrator | "election_epoch": 6,
2026-03-31 04:06:28.609333 | orchestrator | "quorum": [
2026-03-31 04:06:28.609355 | orchestrator | 0,
2026-03-31 04:06:28.609367 | orchestrator | 1,
2026-03-31 04:06:28.609378 | orchestrator | 2
2026-03-31 04:06:28.609389 | orchestrator | ],
2026-03-31 04:06:28.609400 | orchestrator | "quorum_names": [
2026-03-31 04:06:28.609411 | orchestrator | "testbed-node-0",
2026-03-31 04:06:28.609422 | orchestrator | "testbed-node-1",
2026-03-31 04:06:28.609432 | orchestrator | "testbed-node-2"
2026-03-31 04:06:28.609443 | orchestrator | ],
2026-03-31 04:06:28.609455 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-03-31 04:06:28.609467 | orchestrator | "quorum_age": 4262,
2026-03-31 04:06:28.609478 | orchestrator | "features": {
2026-03-31 04:06:28.609489 | orchestrator | "quorum_con": "4540138322906710015",
2026-03-31 04:06:28.609500 | orchestrator | "quorum_mon": [
2026-03-31 04:06:28.609510 | orchestrator | "kraken",
2026-03-31 04:06:28.609521 | orchestrator | "luminous",
2026-03-31 04:06:28.609532 | orchestrator | "mimic",
2026-03-31 04:06:28.609543 | orchestrator | "osdmap-prune",
2026-03-31 04:06:28.609553 | orchestrator | "nautilus",
2026-03-31 04:06:28.609597 | orchestrator | "octopus",
2026-03-31 04:06:28.609609 | orchestrator | "pacific",
2026-03-31 04:06:28.609620 | orchestrator | "elector-pinging",
2026-03-31 04:06:28.609631 | orchestrator | "quincy",
2026-03-31 04:06:28.609642 | orchestrator | "reef"
2026-03-31 04:06:28.609653 | orchestrator | ]
2026-03-31 04:06:28.609664 | orchestrator | },
2026-03-31 04:06:28.609675 | orchestrator | "monmap": {
2026-03-31 04:06:28.609686 | orchestrator | "epoch": 1,
2026-03-31 04:06:28.609697 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-03-31 04:06:28.609709 | orchestrator | "modified": "2026-03-31T02:55:03.670616Z",
2026-03-31 04:06:28.609720 | orchestrator | "created": "2026-03-31T02:55:03.670616Z",
2026-03-31 04:06:28.609731 | orchestrator | "min_mon_release": 18,
2026-03-31 04:06:28.609742 | orchestrator | "min_mon_release_name": "reef",
2026-03-31 04:06:28.609753 | orchestrator | "election_strategy": 1,
2026-03-31 04:06:28.609764 | orchestrator | "disallowed_leaders: ": "",
2026-03-31 04:06:28.609775 | orchestrator | "stretch_mode": false,
2026-03-31 04:06:28.609786 | orchestrator | "tiebreaker_mon": "",
2026-03-31 04:06:28.609823 | orchestrator | "removed_ranks: ": "",
2026-03-31 04:06:28.609837 | orchestrator | "features": {
2026-03-31 04:06:28.609848 | orchestrator | "persistent": [
2026-03-31 04:06:28.609861 | orchestrator | "kraken",
2026-03-31 04:06:28.609873 | orchestrator | "luminous",
2026-03-31 04:06:28.609885 | orchestrator | "mimic",
2026-03-31 04:06:28.609897 | orchestrator | "osdmap-prune",
2026-03-31 04:06:28.609909 | orchestrator | "nautilus",
2026-03-31 04:06:28.609921 | orchestrator | "octopus",
2026-03-31 04:06:28.609934 | orchestrator | "pacific",
2026-03-31 04:06:28.609946 | orchestrator | "elector-pinging",
2026-03-31 04:06:28.609958 | orchestrator | "quincy",
2026-03-31 04:06:28.609970 | orchestrator | "reef"
2026-03-31 04:06:28.609982 | orchestrator | ],
2026-03-31 04:06:28.609994 | orchestrator | "optional": []
2026-03-31 04:06:28.610006 | orchestrator | },
2026-03-31 04:06:28.610080 | orchestrator | "mons": [
2026-03-31 04:06:28.610093 | orchestrator | {
2026-03-31 04:06:28.610106 | orchestrator | "rank": 0,
2026-03-31 04:06:28.610120 | orchestrator | "name": "testbed-node-0",
2026-03-31 04:06:28.610133 | orchestrator | "public_addrs": {
2026-03-31 04:06:28.610145 | orchestrator | "addrvec": [
2026-03-31 04:06:28.610157 | orchestrator | {
2026-03-31 04:06:28.610169 | orchestrator | "type": "v2",
2026-03-31 04:06:28.610182 | orchestrator | "addr": "192.168.16.10:3300",
2026-03-31 04:06:28.610196 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610208 | orchestrator | },
2026-03-31 04:06:28.610220 | orchestrator | {
2026-03-31 04:06:28.610231 | orchestrator | "type": "v1",
2026-03-31 04:06:28.610242 | orchestrator | "addr": "192.168.16.10:6789",
2026-03-31 04:06:28.610252 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610263 | orchestrator | }
2026-03-31 04:06:28.610274 | orchestrator | ]
2026-03-31 04:06:28.610285 | orchestrator | },
2026-03-31 04:06:28.610296 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-03-31 04:06:28.610307 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-03-31 04:06:28.610318 | orchestrator | "priority": 0,
2026-03-31 04:06:28.610328 | orchestrator | "weight": 0,
2026-03-31 04:06:28.610339 | orchestrator | "crush_location": "{}"
2026-03-31 04:06:28.610350 | orchestrator | },
2026-03-31 04:06:28.610361 | orchestrator | {
2026-03-31 04:06:28.610372 | orchestrator | "rank": 1,
2026-03-31 04:06:28.610382 | orchestrator | "name": "testbed-node-1",
2026-03-31 04:06:28.610393 | orchestrator | "public_addrs": {
2026-03-31 04:06:28.610404 | orchestrator | "addrvec": [
2026-03-31 04:06:28.610415 | orchestrator | {
2026-03-31 04:06:28.610426 | orchestrator | "type": "v2",
2026-03-31 04:06:28.610456 | orchestrator | "addr": "192.168.16.11:3300",
2026-03-31 04:06:28.610467 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610478 | orchestrator | },
2026-03-31 04:06:28.610489 | orchestrator | {
2026-03-31 04:06:28.610500 | orchestrator | "type": "v1",
2026-03-31 04:06:28.610510 | orchestrator | "addr": "192.168.16.11:6789",
2026-03-31 04:06:28.610521 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610532 | orchestrator | }
2026-03-31 04:06:28.610543 | orchestrator | ]
2026-03-31 04:06:28.610554 | orchestrator | },
2026-03-31 04:06:28.610605 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-03-31 04:06:28.610617 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-03-31 04:06:28.610628 | orchestrator | "priority": 0,
2026-03-31 04:06:28.610638 | orchestrator | "weight": 0,
2026-03-31 04:06:28.610649 | orchestrator | "crush_location": "{}"
2026-03-31 04:06:28.610660 | orchestrator | },
2026-03-31 04:06:28.610671 | orchestrator | {
2026-03-31 04:06:28.610682 | orchestrator | "rank": 2,
2026-03-31 04:06:28.610692 | orchestrator | "name": "testbed-node-2",
2026-03-31 04:06:28.610703 | orchestrator | "public_addrs": {
2026-03-31 04:06:28.610714 | orchestrator | "addrvec": [
2026-03-31 04:06:28.610725 | orchestrator | {
2026-03-31 04:06:28.610735 | orchestrator | "type": "v2",
2026-03-31 04:06:28.610746 | orchestrator | "addr": "192.168.16.12:3300",
2026-03-31 04:06:28.610757 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610768 | orchestrator | },
2026-03-31 04:06:28.610778 | orchestrator | {
2026-03-31 04:06:28.610789 | orchestrator | "type": "v1",
2026-03-31 04:06:28.610800 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-31 04:06:28.610811 | orchestrator | "nonce": 0
2026-03-31 04:06:28.610822 | orchestrator | }
2026-03-31 04:06:28.610833 | orchestrator | ]
2026-03-31 04:06:28.610853 | orchestrator | },
2026-03-31 04:06:28.610864 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-31 04:06:28.610875 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-31 04:06:28.610886 | orchestrator | "priority": 0,
2026-03-31 04:06:28.610897 | orchestrator | "weight": 0,
2026-03-31 04:06:28.610907 | orchestrator | "crush_location": "{}"
2026-03-31 04:06:28.610918 | orchestrator | }
2026-03-31 04:06:28.610929 | orchestrator | ]
2026-03-31 04:06:28.610940 | orchestrator | }
2026-03-31 04:06:28.610954 | orchestrator | }
2026-03-31 04:06:28.610992 | orchestrator |
2026-03-31 04:06:28.611020 | orchestrator | # Ceph free space status
2026-03-31 04:06:28.611036 | orchestrator |
2026-03-31 04:06:28.611053 | orchestrator | + echo
2026-03-31 04:06:28.611069 | orchestrator | + echo '# Ceph free space status'
2026-03-31 04:06:28.611087 | orchestrator | + echo
2026-03-31 04:06:28.611104 | orchestrator | + ceph df
2026-03-31 04:06:29.243030 | orchestrator | --- RAW STORAGE ---
2026-03-31 04:06:29.243115 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-31 04:06:29.243142 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-03-31 04:06:29.243153 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-03-31 04:06:29.243164 | orchestrator |
2026-03-31 04:06:29.243175 | orchestrator | --- POOLS ---
2026-03-31 04:06:29.243187 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-31 04:06:29.243200 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-03-31 04:06:29.243211 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-31 04:06:29.243222 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-31 04:06:29.243233 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-31 04:06:29.243243 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-31 04:06:29.243254 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-31 04:06:29.243264 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-31 04:06:29.243274 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-31 04:06:29.243283 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB
2026-03-31 04:06:29.243293 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-31 04:06:29.243304 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-31 04:06:29.243313 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB
2026-03-31 04:06:29.243323 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-31 04:06:29.243333 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-31 04:06:29.301971 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-31 04:06:29.377448 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-31 04:06:29.377527 | orchestrator | + osism apply facts
2026-03-31 04:06:31.752087 | orchestrator | 2026-03-31 04:06:31 | INFO  | Task 8f2acee5-01d5-4463-b502-2f2e79caa6cf (facts) was prepared for execution.
2026-03-31 04:06:31.752184 | orchestrator | 2026-03-31 04:06:31 | INFO  | It takes a moment until task 8f2acee5-01d5-4463-b502-2f2e79caa6cf (facts) has been started and output is visible here.
2026-03-31 04:06:47.002252 | orchestrator |
2026-03-31 04:06:47.002364 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-31 04:06:47.002375 | orchestrator |
2026-03-31 04:06:47.002381 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-31 04:06:47.002386 | orchestrator | Tuesday 31 March 2026 04:06:36 +0000 (0:00:00.302) 0:00:00.302 *********
2026-03-31 04:06:47.002391 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:06:47.002397 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:06:47.002402 | orchestrator | ok: [testbed-manager]
2026-03-31 04:06:47.002407 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:06:47.002411 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:06:47.002416 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:06:47.002420 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:06:47.002447 | orchestrator |
2026-03-31 04:06:47.002452 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-31 04:06:47.002457 | orchestrator | Tuesday 31 March 2026 04:06:38 +0000 (0:00:01.360) 0:00:01.663 *********
2026-03-31 04:06:47.002462 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:06:47.002467 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:06:47.002472 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:06:47.002476 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:06:47.002481 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:06:47.002486 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:06:47.002490 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:06:47.002495 | orchestrator |
2026-03-31 04:06:47.002499 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-31 04:06:47.002504 | orchestrator |
2026-03-31 04:06:47.002508 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 04:06:47.002513 | orchestrator | Tuesday 31 March 2026 04:06:39 +0000 (0:00:01.564) 0:00:03.228 *********
2026-03-31 04:06:47.002517 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:06:47.002522 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:06:47.002527 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:06:47.002531 | orchestrator | ok: [testbed-manager]
2026-03-31 04:06:47.002571 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:06:47.002577 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:06:47.002581 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:06:47.002586 | orchestrator |
2026-03-31 04:06:47.002590 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-31 04:06:47.002595 | orchestrator |
2026-03-31 04:06:47.002599 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-31 04:06:47.002604 | orchestrator | Tuesday 31 March 2026 04:06:45 +0000 (0:00:06.044) 0:00:09.273 *********
2026-03-31 04:06:47.002609 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:06:47.002614 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:06:47.002618 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:06:47.002623 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:06:47.002627 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:06:47.002632 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:06:47.002636 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:06:47.002641 | orchestrator |
2026-03-31 04:06:47.002645 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:06:47.002650 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002656 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002672 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002677 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002682 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002686 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002691 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:06:47.002695 | orchestrator |
2026-03-31 04:06:47.002700 | orchestrator |
2026-03-31 04:06:47.002705 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:06:47.002709 | orchestrator | Tuesday 31 March 2026 04:06:46 +0000 (0:00:00.678) 0:00:09.951 *********
2026-03-31 04:06:47.002719 | orchestrator | ===============================================================================
2026-03-31 04:06:47.002724 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.04s
2026-03-31 04:06:47.002728 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.56s
2026-03-31 04:06:47.002733 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s
2026-03-31 04:06:47.002737 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s
2026-03-31 04:06:47.389985 | orchestrator | + osism validate ceph-mons
2026-03-31 04:07:22.618263 | orchestrator |
2026-03-31 04:07:22.618365 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-31 04:07:22.618377 | orchestrator |
2026-03-31 04:07:22.618385 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-31 04:07:22.618393 | orchestrator | Tuesday 31 March 2026 04:07:04 +0000 (0:00:00.465) 0:00:00.465 *********
2026-03-31 04:07:22.618402 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-31 04:07:22.618410 | orchestrator |
2026-03-31 04:07:22.618417 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-31 04:07:22.618424 | orchestrator | Tuesday 31 March 2026 04:07:05 +0000 (0:00:00.856) 0:00:01.321 *********
2026-03-31 04:07:22.618432 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-31 04:07:22.618439 | orchestrator |
2026-03-31 04:07:22.618446 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-31 04:07:22.618454 | orchestrator | Tuesday 31 March 2026 04:07:06 +0000 (0:00:01.308) 0:00:02.630 *********
2026-03-31 04:07:22.618461 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618469 | orchestrator |
2026-03-31 04:07:22.618477 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-31 04:07:22.618484 | orchestrator | Tuesday 31 March 2026 04:07:07 +0000 (0:00:00.201) 0:00:02.831 *********
2026-03-31 04:07:22.618521 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618530 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:07:22.618537 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:07:22.618544 | orchestrator |
2026-03-31 04:07:22.618551 | orchestrator | TASK [Get container info] ******************************************************
2026-03-31 04:07:22.618558 | orchestrator | Tuesday 31 March 2026 04:07:07 +0000 (0:00:00.386) 0:00:03.218 *********
2026-03-31 04:07:22.618565 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618573 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:07:22.618580 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:07:22.618587 | orchestrator |
2026-03-31 04:07:22.618594 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-31 04:07:22.618601 | orchestrator | Tuesday 31 March 2026 04:07:08 +0000 (0:00:01.100) 0:00:04.318 *********
2026-03-31 04:07:22.618609 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.618616 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:07:22.618623 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:07:22.618630 | orchestrator |
2026-03-31 04:07:22.618638 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-31 04:07:22.618645 | orchestrator | Tuesday 31 March 2026 04:07:08 +0000 (0:00:00.367) 0:00:04.686 *********
2026-03-31 04:07:22.618653 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618660 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:07:22.618667 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:07:22.618674 | orchestrator |
2026-03-31 04:07:22.618681 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-31 04:07:22.618689 | orchestrator | Tuesday 31 March 2026 04:07:09 +0000 (0:00:00.387) 0:00:05.300 *********
2026-03-31 04:07:22.618705 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618712 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:07:22.618719 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:07:22.618727 | orchestrator |
2026-03-31 04:07:22.618734 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-31 04:07:22.618741 | orchestrator | Tuesday 31 March 2026 04:07:09 +0000 (0:00:00.398) 0:00:05.688 *********
2026-03-31 04:07:22.618774 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.618782 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:07:22.618789 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:07:22.618796 | orchestrator |
2026-03-31 04:07:22.618803 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-31 04:07:22.618811 | orchestrator | Tuesday 31 March 2026 04:07:10 +0000 (0:00:00.684) 0:00:06.086 *********
2026-03-31 04:07:22.618836 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.618843 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:07:22.618850 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:07:22.618857 | orchestrator |
2026-03-31 04:07:22.618864 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-31 04:07:22.618872 | orchestrator | Tuesday 31 March 2026 04:07:10 +0000 (0:00:00.271) 0:00:06.771 *********
2026-03-31 04:07:22.618879 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.618886 | orchestrator |
2026-03-31 04:07:22.618894 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-31 04:07:22.618901 | orchestrator | Tuesday 31 March 2026 04:07:11 +0000 (0:00:00.315) 0:00:07.042 *********
2026-03-31 04:07:22.618908 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.618915 | orchestrator |
2026-03-31 04:07:22.618923 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-31 04:07:22.618930 | orchestrator | Tuesday 31 March 2026 04:07:11 +0000 (0:00:00.301) 0:00:07.358 *********
2026-03-31 04:07:22.618937 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.618944 | orchestrator |
2026-03-31 04:07:22.618951 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:07:22.618958 | orchestrator | Tuesday 31 March 2026 04:07:11 +0000 (0:00:00.080) 0:00:07.660 *********
2026-03-31 04:07:22.618965 | orchestrator |
2026-03-31 04:07:22.618972 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:07:22.618979 | orchestrator | Tuesday 31 March 2026 04:07:11 +0000 (0:00:00.077) 0:00:07.740 *********
2026-03-31 04:07:22.618987 | orchestrator |
2026-03-31 04:07:22.618994 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:07:22.619001 | orchestrator | Tuesday 31 March 2026 04:07:12 +0000 (0:00:00.077) 0:00:07.817 *********
2026-03-31 04:07:22.619008 | orchestrator |
2026-03-31 04:07:22.619015 | orchestrator | TASK [Print report file information] *******************************************
2026-03-31 04:07:22.619022 | orchestrator | Tuesday 31 March 2026 04:07:12 +0000 (0:00:00.077) 0:00:07.895 *********
2026-03-31 04:07:22.619029 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619036 | orchestrator |
2026-03-31 04:07:22.619044 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-31 04:07:22.619051 | orchestrator | Tuesday 31 March 2026 04:07:12 +0000 (0:00:00.313) 0:00:08.209 *********
2026-03-31 04:07:22.619058 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619065 | orchestrator |
2026-03-31 04:07:22.619089 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-31 04:07:22.619097 | orchestrator | Tuesday 31 March 2026 04:07:12 +0000 (0:00:00.289) 0:00:08.498 *********
2026-03-31 04:07:22.619105 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619112 | orchestrator |
2026-03-31 04:07:22.619119 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-31 04:07:22.619126 | orchestrator | Tuesday 31 March 2026 04:07:12 +0000 (0:00:00.130) 0:00:08.629 *********
2026-03-31 04:07:22.619133 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:07:22.619140 | orchestrator |
2026-03-31 04:07:22.619153 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-31 04:07:22.619167 | orchestrator | Tuesday 31 March 2026 04:07:14 +0000 (0:00:01.684) 0:00:10.313 *********
2026-03-31 04:07:22.619179 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619192 | orchestrator |
2026-03-31 04:07:22.619204 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-31 04:07:22.619226 | orchestrator | Tuesday 31 March 2026 04:07:15 +0000 (0:00:00.155) 0:00:10.937 *********
2026-03-31 04:07:22.619237 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619248 | orchestrator |
2026-03-31 04:07:22.619279 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-31 04:07:22.619294 | orchestrator | Tuesday 31 March 2026 04:07:15 +0000 (0:00:00.395) 0:00:11.092 *********
2026-03-31 04:07:22.619308 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619319 | orchestrator |
2026-03-31 04:07:22.619326 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-31 04:07:22.619333 | orchestrator | Tuesday 31 March 2026 04:07:15 +0000 (0:00:00.385) 0:00:11.488 *********
2026-03-31 04:07:22.619340 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619347 | orchestrator |
2026-03-31 04:07:22.619354 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-31 04:07:22.619361 | orchestrator | Tuesday 31 March 2026 04:07:16 +0000 (0:00:00.385) 0:00:11.873 *********
2026-03-31 04:07:22.619368 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619375 | orchestrator |
2026-03-31 04:07:22.619383 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-31 04:07:22.619390 | orchestrator | Tuesday 31 March 2026 04:07:16 +0000 (0:00:00.143) 0:00:12.017 *********
2026-03-31 04:07:22.619397 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619404 | orchestrator |
2026-03-31 04:07:22.619411 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-31 04:07:22.619418 | orchestrator | Tuesday 31 March 2026 04:07:16 +0000 (0:00:00.144) 0:00:12.162 *********
2026-03-31 04:07:22.619425 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619432 | orchestrator |
2026-03-31 04:07:22.619439 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-31 04:07:22.619446 | orchestrator | Tuesday 31 March 2026 04:07:16 +0000 (0:00:00.157) 0:00:12.319 *********
2026-03-31 04:07:22.619453 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:07:22.619460 | orchestrator |
2026-03-31 04:07:22.619467 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-31 04:07:22.619474 | orchestrator | Tuesday 31 March 2026 04:07:17 +0000 (0:00:01.292) 0:00:13.612 *********
2026-03-31 04:07:22.619481 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619488 | orchestrator |
2026-03-31 04:07:22.619520 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-31 04:07:22.619527 | orchestrator | Tuesday 31 March 2026 04:07:18 +0000 (0:00:00.346) 0:00:13.959 *********
2026-03-31 04:07:22.619534 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619542 | orchestrator |
2026-03-31 04:07:22.619549 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-31 04:07:22.619556 | orchestrator | Tuesday 31 March 2026 04:07:18 +0000 (0:00:00.157) 0:00:14.116 *********
2026-03-31 04:07:22.619563 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:07:22.619570 | orchestrator |
2026-03-31 04:07:22.619578 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-31 04:07:22.619585 | orchestrator | Tuesday 31 March 2026 04:07:18 +0000 (0:00:00.165) 0:00:14.282 *********
2026-03-31 04:07:22.619592 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619604 | orchestrator |
2026-03-31 04:07:22.619611 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-31 04:07:22.619618 | orchestrator | Tuesday 31 March 2026 04:07:18 +0000 (0:00:00.167) 0:00:14.449 *********
2026-03-31 04:07:22.619625 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619633 | orchestrator |
2026-03-31 04:07:22.619640 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-31 04:07:22.619647 | orchestrator | Tuesday 31 March 2026 04:07:19 +0000 (0:00:00.409) 0:00:14.858 *********
2026-03-31 04:07:22.619654 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-31 04:07:22.619661 | orchestrator |
2026-03-31 04:07:22.619669 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-31 04:07:22.619682 | orchestrator | Tuesday 31 March 2026 04:07:19 +0000 (0:00:00.315) 0:00:15.174 *********
2026-03-31 04:07:22.619689 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:07:22.619696 | orchestrator |
2026-03-31 04:07:22.619704 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-31 04:07:22.619711 | orchestrator | Tuesday 31 March 2026 04:07:19 +0000 (0:00:00.335) 0:00:15.509 *********
2026-03-31 04:07:22.619718 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-31 04:07:22.619725 | orchestrator |
2026-03-31 04:07:22.619732 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-31 04:07:22.619739 | orchestrator | Tuesday 31 March 2026 04:07:21 +0000 (0:00:02.028) 0:00:17.538 *********
2026-03-31 04:07:22.619747 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-31 04:07:22.619754 | orchestrator |
2026-03-31 04:07:22.619761 | orchestrator |
TASK [Aggregate test results step three] *************************************** 2026-03-31 04:07:22.619768 | orchestrator | Tuesday 31 March 2026 04:07:22 +0000 (0:00:00.313) 0:00:17.852 ********* 2026-03-31 04:07:22.619775 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:22.619782 | orchestrator | 2026-03-31 04:07:22.619797 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:25.699074 | orchestrator | Tuesday 31 March 2026 04:07:22 +0000 (0:00:00.285) 0:00:18.137 ********* 2026-03-31 04:07:25.699208 | orchestrator | 2026-03-31 04:07:25.699237 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:25.699257 | orchestrator | Tuesday 31 March 2026 04:07:22 +0000 (0:00:00.097) 0:00:18.235 ********* 2026-03-31 04:07:25.699275 | orchestrator | 2026-03-31 04:07:25.699290 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:25.699309 | orchestrator | Tuesday 31 March 2026 04:07:22 +0000 (0:00:00.072) 0:00:18.307 ********* 2026-03-31 04:07:25.699326 | orchestrator | 2026-03-31 04:07:25.699342 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-31 04:07:25.699358 | orchestrator | Tuesday 31 March 2026 04:07:22 +0000 (0:00:00.074) 0:00:18.382 ********* 2026-03-31 04:07:25.699376 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:25.699393 | orchestrator | 2026-03-31 04:07:25.699412 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-31 04:07:25.699428 | orchestrator | Tuesday 31 March 2026 04:07:24 +0000 (0:00:01.723) 0:00:20.105 ********* 2026-03-31 04:07:25.699444 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-31 04:07:25.699463 | orchestrator |  "msg": [ 
2026-03-31 04:07:25.699481 | orchestrator |  "Validator run completed.", 2026-03-31 04:07:25.699569 | orchestrator |  "You can find the report file here:", 2026-03-31 04:07:25.699589 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-31T04:07:05+00:00-report.json", 2026-03-31 04:07:25.699611 | orchestrator |  "on the following host:", 2026-03-31 04:07:25.699631 | orchestrator |  "testbed-manager" 2026-03-31 04:07:25.699650 | orchestrator |  ] 2026-03-31 04:07:25.699668 | orchestrator | } 2026-03-31 04:07:25.699687 | orchestrator | 2026-03-31 04:07:25.699707 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:07:25.699727 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-31 04:07:25.699765 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 04:07:25.699794 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 04:07:25.699805 | orchestrator | 2026-03-31 04:07:25.699816 | orchestrator | 2026-03-31 04:07:25.699827 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:07:25.699874 | orchestrator | Tuesday 31 March 2026 04:07:25 +0000 (0:00:00.921) 0:00:21.026 ********* 2026-03-31 04:07:25.699886 | orchestrator | =============================================================================== 2026-03-31 04:07:25.699897 | orchestrator | Aggregate test results step one ----------------------------------------- 2.03s 2026-03-31 04:07:25.699907 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2026-03-31 04:07:25.699918 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.68s 2026-03-31 04:07:25.699929 | orchestrator | Create report output directory 
------------------------------------------ 1.31s 2026-03-31 04:07:25.699940 | orchestrator | Gather status data ------------------------------------------------------ 1.29s 2026-03-31 04:07:25.699950 | orchestrator | Get container info ------------------------------------------------------ 1.10s 2026-03-31 04:07:25.699961 | orchestrator | Print report file information ------------------------------------------- 0.92s 2026-03-31 04:07:25.699972 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-03-31 04:07:25.699997 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.68s 2026-03-31 04:07:25.700009 | orchestrator | Set quorum test data ---------------------------------------------------- 0.62s 2026-03-31 04:07:25.700020 | orchestrator | Set test result to passed if container is existing ---------------------- 0.61s 2026-03-31 04:07:25.700030 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.41s 2026-03-31 04:07:25.700041 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.40s 2026-03-31 04:07:25.700052 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.40s 2026-03-31 04:07:25.700063 | orchestrator | Prepare test data ------------------------------------------------------- 0.39s 2026-03-31 04:07:25.700073 | orchestrator | Prepare test data for container existance test -------------------------- 0.39s 2026-03-31 04:07:25.700084 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.39s 2026-03-31 04:07:25.700095 | orchestrator | Set test result to failed if container is missing ----------------------- 0.37s 2026-03-31 04:07:25.700105 | orchestrator | Set health test data ---------------------------------------------------- 0.35s 2026-03-31 04:07:25.700116 | orchestrator | Set validation result to failed if a test 
failed ------------------------ 0.34s 2026-03-31 04:07:26.085839 | orchestrator | + osism validate ceph-mgrs 2026-03-31 04:07:58.427627 | orchestrator | 2026-03-31 04:07:58.427763 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-31 04:07:58.427791 | orchestrator | 2026-03-31 04:07:58.427812 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-31 04:07:58.427830 | orchestrator | Tuesday 31 March 2026 04:07:43 +0000 (0:00:00.554) 0:00:00.554 ********* 2026-03-31 04:07:58.427843 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.427854 | orchestrator | 2026-03-31 04:07:58.427865 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-31 04:07:58.427876 | orchestrator | Tuesday 31 March 2026 04:07:44 +0000 (0:00:00.886) 0:00:01.441 ********* 2026-03-31 04:07:58.427887 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.427898 | orchestrator | 2026-03-31 04:07:58.427909 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-31 04:07:58.427920 | orchestrator | Tuesday 31 March 2026 04:07:45 +0000 (0:00:01.061) 0:00:02.502 ********* 2026-03-31 04:07:58.427931 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.427943 | orchestrator | 2026-03-31 04:07:58.427954 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-31 04:07:58.427965 | orchestrator | Tuesday 31 March 2026 04:07:45 +0000 (0:00:00.152) 0:00:02.655 ********* 2026-03-31 04:07:58.427976 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.427992 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:07:58.428010 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:07:58.428028 | orchestrator | 2026-03-31 04:07:58.428066 | orchestrator | TASK [Get container info] 
****************************************************** 2026-03-31 04:07:58.428077 | orchestrator | Tuesday 31 March 2026 04:07:45 +0000 (0:00:00.337) 0:00:02.992 ********* 2026-03-31 04:07:58.428090 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:07:58.428102 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.428115 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:07:58.428127 | orchestrator | 2026-03-31 04:07:58.428140 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-31 04:07:58.428153 | orchestrator | Tuesday 31 March 2026 04:07:46 +0000 (0:00:00.970) 0:00:03.963 ********* 2026-03-31 04:07:58.428165 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428178 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:07:58.428191 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:07:58.428203 | orchestrator | 2026-03-31 04:07:58.428215 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-31 04:07:58.428228 | orchestrator | Tuesday 31 March 2026 04:07:47 +0000 (0:00:00.359) 0:00:04.323 ********* 2026-03-31 04:07:58.428240 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.428254 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:07:58.428266 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:07:58.428279 | orchestrator | 2026-03-31 04:07:58.428293 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-31 04:07:58.428305 | orchestrator | Tuesday 31 March 2026 04:07:47 +0000 (0:00:00.559) 0:00:04.882 ********* 2026-03-31 04:07:58.428318 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.428335 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:07:58.428354 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:07:58.428374 | orchestrator | 2026-03-31 04:07:58.428394 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 
2026-03-31 04:07:58.428414 | orchestrator | Tuesday 31 March 2026 04:07:48 +0000 (0:00:00.353) 0:00:05.236 ********* 2026-03-31 04:07:58.428434 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428480 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:07:58.428500 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:07:58.428520 | orchestrator | 2026-03-31 04:07:58.428539 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-31 04:07:58.428556 | orchestrator | Tuesday 31 March 2026 04:07:48 +0000 (0:00:00.301) 0:00:05.537 ********* 2026-03-31 04:07:58.428568 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.428578 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:07:58.428589 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:07:58.428600 | orchestrator | 2026-03-31 04:07:58.428610 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-31 04:07:58.428622 | orchestrator | Tuesday 31 March 2026 04:07:48 +0000 (0:00:00.394) 0:00:05.932 ********* 2026-03-31 04:07:58.428632 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428643 | orchestrator | 2026-03-31 04:07:58.428654 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-31 04:07:58.428664 | orchestrator | Tuesday 31 March 2026 04:07:48 +0000 (0:00:00.238) 0:00:06.170 ********* 2026-03-31 04:07:58.428675 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428686 | orchestrator | 2026-03-31 04:07:58.428696 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-31 04:07:58.428707 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.241) 0:00:06.411 ********* 2026-03-31 04:07:58.428718 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428729 | orchestrator | 2026-03-31 04:07:58.428740 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-03-31 04:07:58.428751 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.273) 0:00:06.685 ********* 2026-03-31 04:07:58.428762 | orchestrator | 2026-03-31 04:07:58.428772 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:58.428783 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.072) 0:00:06.757 ********* 2026-03-31 04:07:58.428794 | orchestrator | 2026-03-31 04:07:58.428814 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:58.428825 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.066) 0:00:06.824 ********* 2026-03-31 04:07:58.428836 | orchestrator | 2026-03-31 04:07:58.428847 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-31 04:07:58.428857 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.070) 0:00:06.894 ********* 2026-03-31 04:07:58.428868 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428879 | orchestrator | 2026-03-31 04:07:58.428890 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-31 04:07:58.428900 | orchestrator | Tuesday 31 March 2026 04:07:49 +0000 (0:00:00.245) 0:00:07.140 ********* 2026-03-31 04:07:58.428911 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.428922 | orchestrator | 2026-03-31 04:07:58.428952 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-31 04:07:58.428970 | orchestrator | Tuesday 31 March 2026 04:07:50 +0000 (0:00:00.374) 0:00:07.514 ********* 2026-03-31 04:07:58.428988 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.429007 | orchestrator | 2026-03-31 04:07:58.429025 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-03-31 
04:07:58.429040 | orchestrator | Tuesday 31 March 2026 04:07:50 +0000 (0:00:00.112) 0:00:07.627 ********* 2026-03-31 04:07:58.429051 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:07:58.429062 | orchestrator | 2026-03-31 04:07:58.429072 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-31 04:07:58.429083 | orchestrator | Tuesday 31 March 2026 04:07:52 +0000 (0:00:01.923) 0:00:09.550 ********* 2026-03-31 04:07:58.429094 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.429105 | orchestrator | 2026-03-31 04:07:58.429116 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-31 04:07:58.429126 | orchestrator | Tuesday 31 March 2026 04:07:52 +0000 (0:00:00.468) 0:00:10.019 ********* 2026-03-31 04:07:58.429137 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.429148 | orchestrator | 2026-03-31 04:07:58.429159 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-31 04:07:58.429169 | orchestrator | Tuesday 31 March 2026 04:07:53 +0000 (0:00:00.319) 0:00:10.339 ********* 2026-03-31 04:07:58.429180 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.429190 | orchestrator | 2026-03-31 04:07:58.429201 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-31 04:07:58.429212 | orchestrator | Tuesday 31 March 2026 04:07:53 +0000 (0:00:00.174) 0:00:10.513 ********* 2026-03-31 04:07:58.429223 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:07:58.429234 | orchestrator | 2026-03-31 04:07:58.429244 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-31 04:07:58.429255 | orchestrator | Tuesday 31 March 2026 04:07:53 +0000 (0:00:00.153) 0:00:10.667 ********* 2026-03-31 04:07:58.429266 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.429276 | 
orchestrator | 2026-03-31 04:07:58.429287 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-31 04:07:58.429298 | orchestrator | Tuesday 31 March 2026 04:07:53 +0000 (0:00:00.261) 0:00:10.929 ********* 2026-03-31 04:07:58.429308 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:07:58.429319 | orchestrator | 2026-03-31 04:07:58.429330 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-31 04:07:58.429360 | orchestrator | Tuesday 31 March 2026 04:07:54 +0000 (0:00:00.296) 0:00:11.226 ********* 2026-03-31 04:07:58.429375 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.429393 | orchestrator | 2026-03-31 04:07:58.429412 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-31 04:07:58.429430 | orchestrator | Tuesday 31 March 2026 04:07:55 +0000 (0:00:01.568) 0:00:12.794 ********* 2026-03-31 04:07:58.429473 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.429505 | orchestrator | 2026-03-31 04:07:58.429524 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-31 04:07:58.429539 | orchestrator | Tuesday 31 March 2026 04:07:55 +0000 (0:00:00.285) 0:00:13.079 ********* 2026-03-31 04:07:58.429550 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.429561 | orchestrator | 2026-03-31 04:07:58.429571 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:58.429582 | orchestrator | Tuesday 31 March 2026 04:07:56 +0000 (0:00:00.268) 0:00:13.348 ********* 2026-03-31 04:07:58.429593 | orchestrator | 2026-03-31 04:07:58.429603 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:58.429614 | orchestrator | Tuesday 31 
March 2026 04:07:56 +0000 (0:00:00.073) 0:00:13.422 ********* 2026-03-31 04:07:58.429625 | orchestrator | 2026-03-31 04:07:58.429644 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-31 04:07:58.429662 | orchestrator | Tuesday 31 March 2026 04:07:56 +0000 (0:00:00.070) 0:00:13.492 ********* 2026-03-31 04:07:58.429681 | orchestrator | 2026-03-31 04:07:58.429700 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-31 04:07:58.429717 | orchestrator | Tuesday 31 March 2026 04:07:56 +0000 (0:00:00.291) 0:00:13.783 ********* 2026-03-31 04:07:58.429728 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-31 04:07:58.429739 | orchestrator | 2026-03-31 04:07:58.429749 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-31 04:07:58.429767 | orchestrator | Tuesday 31 March 2026 04:07:57 +0000 (0:00:01.383) 0:00:15.167 ********* 2026-03-31 04:07:58.429778 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-31 04:07:58.429789 | orchestrator |  "msg": [ 2026-03-31 04:07:58.429800 | orchestrator |  "Validator run completed.", 2026-03-31 04:07:58.429816 | orchestrator |  "You can find the report file here:", 2026-03-31 04:07:58.429834 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-31T04:07:44+00:00-report.json", 2026-03-31 04:07:58.429854 | orchestrator |  "on the following host:", 2026-03-31 04:07:58.429873 | orchestrator |  "testbed-manager" 2026-03-31 04:07:58.429892 | orchestrator |  ] 2026-03-31 04:07:58.429908 | orchestrator | } 2026-03-31 04:07:58.429928 | orchestrator | 2026-03-31 04:07:58.429946 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:07:58.430123 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2026-03-31 04:07:58.430149 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 04:07:58.430187 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-31 04:07:58.875731 | orchestrator | 2026-03-31 04:07:58.875829 | orchestrator | 2026-03-31 04:07:58.875843 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:07:58.875856 | orchestrator | Tuesday 31 March 2026 04:07:58 +0000 (0:00:00.455) 0:00:15.623 ********* 2026-03-31 04:07:58.875866 | orchestrator | =============================================================================== 2026-03-31 04:07:58.875875 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.92s 2026-03-31 04:07:58.875885 | orchestrator | Aggregate test results step one ----------------------------------------- 1.57s 2026-03-31 04:07:58.875895 | orchestrator | Write report file ------------------------------------------------------- 1.38s 2026-03-31 04:07:58.875905 | orchestrator | Create report output directory ------------------------------------------ 1.06s 2026-03-31 04:07:58.875914 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2026-03-31 04:07:58.875924 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s 2026-03-31 04:07:58.875959 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2026-03-31 04:07:58.875969 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s 2026-03-31 04:07:58.875979 | orchestrator | Print report file information ------------------------------------------- 0.46s 2026-03-31 04:07:58.875989 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s 2026-03-31 04:07:58.875999 | orchestrator | 
Set test result to passed if ceph-mgr is running ------------------------ 0.39s 2026-03-31 04:07:58.876009 | orchestrator | Fail due to missing containers ------------------------------------------ 0.37s 2026-03-31 04:07:58.876018 | orchestrator | Set test result to failed if container is missing ----------------------- 0.36s 2026-03-31 04:07:58.876027 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2026-03-31 04:07:58.876037 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-03-31 04:07:58.876047 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-31 04:07:58.876056 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2026-03-31 04:07:58.876066 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s 2026-03-31 04:07:58.876075 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-03-31 04:07:58.876085 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-03-31 04:07:59.249273 | orchestrator | + osism validate ceph-osds 2026-03-31 04:08:22.267455 | orchestrator | 2026-03-31 04:08:22.267565 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-31 04:08:22.267580 | orchestrator | 2026-03-31 04:08:22.267591 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-31 04:08:22.267601 | orchestrator | Tuesday 31 March 2026 04:08:17 +0000 (0:00:00.494) 0:00:00.494 ********* 2026-03-31 04:08:22.267612 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-31 04:08:22.267622 | orchestrator | 2026-03-31 04:08:22.267631 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-31 
04:08:22.267641 | orchestrator | Tuesday 31 March 2026 04:08:17 +0000 (0:00:00.940) 0:00:01.435 ********* 2026-03-31 04:08:22.267651 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-31 04:08:22.267661 | orchestrator | 2026-03-31 04:08:22.267670 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-31 04:08:22.267680 | orchestrator | Tuesday 31 March 2026 04:08:18 +0000 (0:00:00.661) 0:00:02.096 ********* 2026-03-31 04:08:22.267689 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-31 04:08:22.267699 | orchestrator | 2026-03-31 04:08:22.267715 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-31 04:08:22.267731 | orchestrator | Tuesday 31 March 2026 04:08:19 +0000 (0:00:00.855) 0:00:02.951 ********* 2026-03-31 04:08:22.267748 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:08:22.267765 | orchestrator | 2026-03-31 04:08:22.267783 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-31 04:08:22.267799 | orchestrator | Tuesday 31 March 2026 04:08:19 +0000 (0:00:00.158) 0:00:03.109 ********* 2026-03-31 04:08:22.267816 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:08:22.267830 | orchestrator | 2026-03-31 04:08:22.267844 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-31 04:08:22.267879 | orchestrator | Tuesday 31 March 2026 04:08:19 +0000 (0:00:00.162) 0:00:03.272 ********* 2026-03-31 04:08:22.267897 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:08:22.267910 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:08:22.267924 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:08:22.267938 | orchestrator | 2026-03-31 04:08:22.267952 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-31 04:08:22.267967 | 
orchestrator | Tuesday 31 March 2026 04:08:20 +0000 (0:00:00.408) 0:00:03.681 ********* 2026-03-31 04:08:22.267983 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:08:22.268027 | orchestrator | 2026-03-31 04:08:22.268046 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-31 04:08:22.268064 | orchestrator | Tuesday 31 March 2026 04:08:20 +0000 (0:00:00.153) 0:00:03.834 ********* 2026-03-31 04:08:22.268081 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:08:22.268097 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:08:22.268114 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:08:22.268130 | orchestrator | 2026-03-31 04:08:22.268146 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-31 04:08:22.268161 | orchestrator | Tuesday 31 March 2026 04:08:20 +0000 (0:00:00.373) 0:00:04.208 ********* 2026-03-31 04:08:22.268177 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:08:22.268192 | orchestrator | 2026-03-31 04:08:22.268209 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-31 04:08:22.268225 | orchestrator | Tuesday 31 March 2026 04:08:21 +0000 (0:00:00.842) 0:00:05.050 ********* 2026-03-31 04:08:22.268240 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:08:22.268258 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:08:22.268275 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:08:22.268290 | orchestrator | 2026-03-31 04:08:22.268305 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-31 04:08:22.268322 | orchestrator | Tuesday 31 March 2026 04:08:21 +0000 (0:00:00.339) 0:00:05.390 ********* 2026-03-31 04:08:22.268341 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7515b0867c958991488dd2951fb6307eb0f71260257081f27e14d34f33a4afcd', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 
'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-31 04:08:22.268361 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69a8f4cbf2ebbfa2775886223253a775846a041d3c469d8505e3b10cfa95f130', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-31 04:08:22.268380 | orchestrator | skipping: [testbed-node-3] => (item={'id': '536bbc18925c93d0c0a029ffd058e0db3fb2cd89424b3b3e6eac1264f6e862e9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-31 04:08:22.268399 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b6bc6ad770f94aaa990cf9039662759b6115ee8551a0e0e8b332fa1bc80e7339', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-03-31 04:08:22.268416 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41924980f6815db2ca3b99bcd2aa566c35dcd14c1eda2f7bd0436e28e232519a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-03-31 04:08:22.268497 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05848a45eb05debcd4323c9f0b5a9d1638bd80a400322167cd54afe9e78b6b58', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-03-31 04:08:22.268516 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eff4f7e5bc3009b896275d5c0974de47fc4b5a1bafa9ddda97b6e6a05026b034', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  
2026-03-31 04:08:22.268532 | orchestrator | skipping: [testbed-node-3] => (item={'id': '58105fcbcc2f2067cbca662b6d05ac53d8498e312bd193ec95831fb46f3266b2', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-31 04:08:22.268550 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f952ca5eae7c12f61cdf2df8ae2076a1f84ea6ea3e49204feca5985e2ae9ffac', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '33aa6e2549e7d126837a48eb29639f6e7c7b96818fe747ec4fc82b1270346e52', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268607 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4035aa2ce1c649c8c128697714b69e7cbeacc35984e7a33e946b4d9c6f7027dc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268619 | orchestrator | ok: [testbed-node-3] => (item={'id': '032c34613aa8a766697bf26ea58c8f44e5bd387ed1c89f2ffcac4676b28b9423', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268629 | orchestrator | ok: [testbed-node-3] => (item={'id': '86451f4df65154869444d72287f4e20bd25d58bdde52f5bceb70a8eb6ffaf1ab', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268639 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a91fd4cca4aa856a07b3b9874eb6a9f63be57f599d2e20ad6c026b89c9ccf56f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.268649 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f9db0824448393c95b5a7dbe5858512929a18a6a56408d7d12f4de4c4fd94757', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:22.268659 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a0b047a07c6e3ade6d599789fbc1aa8bd1ddacb64754809e7611933580a069d3', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:22.268670 | orchestrator | skipping: [testbed-node-3] => (item={'id': '03e34ea48849ecf5a570f61feb8bf150b0c8f4786f3db15a72e3aae8ff9186cc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.268680 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0356c3bd8d4ec80b78ef9d6ed52fa2c50ebd6ad7c0339bc5e9f1e94bfa72853', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.268690 | orchestrator | skipping: [testbed-node-4] => (item={'id': '57d83d2d7f3fa308557f3d87762462f8df50417700e8046843f213d6bba14024', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-31 04:08:22.268699 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b087f3136926330172c4b2bc2347a920bc746f358dd77fd18384d625a5075e9f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.268718 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a2df11cf9627dbda626a5e5f788e1ee848284805f39e1947f6144172fc87feb8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-31 04:08:22.575521 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b83bbf702709c083e4fe7cd8f807ebec0e3af00e2f0b290f391c98975dced648', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-31 04:08:22.575648 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1675c5c6204cfa2a1070a0f8cbf55bdc7bc7a4698d9ad015fb695773878a9dca', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-31 04:08:22.575663 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c212f07b914ecf65a83a1f69350b95e7bdc1563ca3a5af2a77acd38417cf9e5a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-31 04:08:22.575677 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c2e11327253d9f7b85f35cd959a352b4d39a55cd1a60a099365090aa7c7c4589', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-31 04:08:22.575687 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'acfc69a7787e32040ed41d08a3dc32bade8822c1beb2585969f9584d22768580', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-31 04:08:22.575741 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd917ab9e53126bbeb1906c4b054a334a3ea5a50002b3a1332472e46c50e58b4f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-31 04:08:22.575753 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb7e966b2e42763f47a605bb2ee8f1398b0fbc4e3719a35c6b2f4a17631d2c78', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575764 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b013387b516de05decf7bc047a360ca6ed0cf369962fb4ee23cb2a7cded0bfe4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575775 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33d58aea1598346c9adb442c78110624c3f79f6375458e1baebc08c2fa3db5b2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575787 | orchestrator | ok: [testbed-node-4] => (item={'id': '0427e73b1d83f8c0092c95efd8e0b11769d1460218a2a17f4c7864c15d0cd9d3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575798 | orchestrator | ok: [testbed-node-4] => (item={'id': '85e2115073c252cc52617ddaa39cc82417a147671f052974ee20a0d2e8d9cd85', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575808 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bd5dab456ea4b00772e9c95a6d38e478723bd7dc2447cab29252f14dacefb380', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575818 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8eba98377641b549af56bb297683c97dca8d8f22becb01c5e5625ba320e7477a', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:22.575828 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c7f1418c18eb6c45a2a804a9d425eaeecbb4e02b67b2f3cf4d2380cc4db3e140', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:22.575856 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd76dea3483e4dc99ca9eb552d6b835f67136f969a7b553b5e1fc8fbb9337b9d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.575875 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffd214de6742a50b4c215cf245f156698dbfd40972390cc5683eb9449a27fbe1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.575886 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3cc543b6c4aeb1a98b69fb623faa875b378bb3a41d8d79877a23efcd29ef1e2f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:22.575896 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bdb4ac2d79f2e5b3db6ec0a2f975a0d60c304d0db03aae143d33d3c8429aa933', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-31 04:08:22.575910 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83211ea755d8169a4b68715735230e8fa4e075378ccfdca201dbd71f81baadfe', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-31 04:08:22.575921 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6ce126115f60d5c9ad932f2d9e2d2a0a412cd5384b486d6576ffc4a6033c58b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-31 04:08:22.575931 | orchestrator | skipping: [testbed-node-5] => (item={'id': '109636b7042b641aedf79ef0170eb5ec5443155da112cf6869d7ad8fea317e9a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-31 04:08:22.575941 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd53309781fec20025e41dea48d6c934da3e1cb909e9f8533b45f5ddbfe44749', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-31 04:08:22.575951 | orchestrator | skipping: [testbed-node-5] => (item={'id': '54b7b74cf9d95018d5ae359cbbef169d50613beac11d19da42e90f7cbc0e32d0', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-31 04:08:22.575961 | orchestrator | skipping: [testbed-node-5] => (item={'id': '292e8aa63fba382dd7a01280a06aae9fb3f42b6d22c3a928e3bb0730c76236d0', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-31 04:08:22.575971 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd6b92adce35bc8751c6366ca275314da38a9d74ad3e2ec300c2e06f3808a52d1', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-31 04:08:22.575981 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9eb7071cc1dee41bf54d35e74e66f93b03da5a53dc8552cc920f13fdf15c000f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.575991 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f068f6fed15793f3fc6dcd0eb921f83488a0a0c0543e695c765b40ea446770b4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.576001 | orchestrator | skipping: [testbed-node-5] => (item={'id': '91a4c80261d38ff468558c18b9aefcf66742b13b396de919c89221c00b1d1cb1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.576018 | orchestrator | ok: [testbed-node-5] => (item={'id': '17a98ffe6744e821ee4ac082982c2cc74f4af391f61a0975d5fc50731e2e0183', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:22.576037 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b4933265c83be828e5b95d36daab1c9c80bf6b0f71a633de08f4d997167ad266', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:35.625242 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46dc92484504699d5dafd0667954c5383fea684f1a79eb3102272827d1c593be', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-31 04:08:35.625325 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e3e1c0179814345d74f08aee3911c69665ce047d325f76d41af564d66112b0a4', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:35.625334 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f175b69770c5735c5367f5bd1193b49a7cfbbfb9064d5a4573c839df09b2005b', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-31 04:08:35.625356 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a5c9452c766e84d2b86fb11a8d376de60289318530a090654345c37fc708ae24', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:35.625362 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9675eda71550d7bf3faf162094af99bbf7a3806487f8931790f0685a959ec6ee', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:35.625367 | orchestrator | skipping: [testbed-node-5] => (item={'id': '88cb02af95df9440ce1bf5e04589bdc97d698430cdf65e29ee3f8c1d09f9be00', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-31 04:08:35.625371 | orchestrator |
2026-03-31 04:08:35.625376 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-31 04:08:35.625381 | orchestrator | Tuesday 31 March 2026 04:08:22 +0000 (0:00:00.617) 0:00:06.007 *********
2026-03-31 04:08:35.625386 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625390 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625394 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625398 | orchestrator |
2026-03-31 04:08:35.625423 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-31 04:08:35.625429 | orchestrator | Tuesday 31 March 2026 04:08:22 +0000 (0:00:00.323) 0:00:06.330 *********
2026-03-31 04:08:35.625433 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625438 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:35.625442 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:35.625446 | orchestrator |
2026-03-31 04:08:35.625450 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-31 04:08:35.625454 | orchestrator | Tuesday 31 March 2026 04:08:23 +0000 (0:00:00.561) 0:00:06.891 *********
2026-03-31 04:08:35.625457 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625461 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625465 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625469 | orchestrator |
2026-03-31 04:08:35.625473 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-31 04:08:35.625477 | orchestrator | Tuesday 31 March 2026 04:08:23 +0000 (0:00:00.356) 0:00:07.248 *********
2026-03-31 04:08:35.625494 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625498 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625502 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625506 | orchestrator |
2026-03-31 04:08:35.625510 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-31 04:08:35.625513 | orchestrator | Tuesday 31 March 2026 04:08:24 +0000 (0:00:00.376) 0:00:07.624 *********
2026-03-31 04:08:35.625518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-31 04:08:35.625523 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-31 04:08:35.625527 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625531 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-31 04:08:35.625534 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-31 04:08:35.625538 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:35.625542 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-31 04:08:35.625546 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-31 04:08:35.625550 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:35.625554 | orchestrator |
2026-03-31 04:08:35.625558 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-31 04:08:35.625562 | orchestrator | Tuesday 31 March 2026 04:08:24 +0000 (0:00:00.417) 0:00:08.042 *********
2026-03-31 04:08:35.625566 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625570 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625574 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625578 | orchestrator |
2026-03-31 04:08:35.625582 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-31 04:08:35.625585 | orchestrator | Tuesday 31 March 2026 04:08:25 +0000 (0:00:00.552) 0:00:08.594 *********
2026-03-31 04:08:35.625589 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625604 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:35.625608 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:35.625612 | orchestrator |
2026-03-31 04:08:35.625616 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-31 04:08:35.625620 | orchestrator | Tuesday 31 March 2026 04:08:25 +0000 (0:00:00.337) 0:00:08.932 *********
2026-03-31 04:08:35.625624 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625628 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:35.625632 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:35.625636 | orchestrator |
2026-03-31 04:08:35.625640 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-31 04:08:35.625643 | orchestrator | Tuesday 31 March 2026 04:08:25 +0000 (0:00:00.356) 0:00:09.288 *********
2026-03-31 04:08:35.625647 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625651 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625655 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625659 | orchestrator |
2026-03-31 04:08:35.625663 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-31 04:08:35.625667 | orchestrator | Tuesday 31 March 2026 04:08:26 +0000 (0:00:00.316) 0:00:09.605 *********
2026-03-31 04:08:35.625671 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625675 | orchestrator |
2026-03-31 04:08:35.625679 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-31 04:08:35.625683 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.910) 0:00:10.516 *********
2026-03-31 04:08:35.625690 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625694 | orchestrator |
2026-03-31 04:08:35.625698 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-31 04:08:35.625702 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.285) 0:00:10.801 *********
2026-03-31 04:08:35.625710 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625714 | orchestrator |
2026-03-31 04:08:35.625718 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:35.625722 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.081) 0:00:11.108 *********
2026-03-31 04:08:35.625725 | orchestrator |
2026-03-31 04:08:35.625729 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:35.625733 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.075) 0:00:11.189 *********
2026-03-31 04:08:35.625737 | orchestrator |
2026-03-31 04:08:35.625741 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:35.625745 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.075) 0:00:11.265 *********
2026-03-31 04:08:35.625749 | orchestrator |
2026-03-31 04:08:35.625753 | orchestrator | TASK [Print report file information] *******************************************
2026-03-31 04:08:35.625764 | orchestrator | Tuesday 31 March 2026 04:08:27 +0000 (0:00:00.079) 0:00:11.345 *********
2026-03-31 04:08:35.625768 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625772 | orchestrator |
2026-03-31 04:08:35.625776 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-31 04:08:35.625780 | orchestrator | Tuesday 31 March 2026 04:08:28 +0000 (0:00:00.310) 0:00:11.655 *********
2026-03-31 04:08:35.625790 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625794 | orchestrator |
2026-03-31 04:08:35.625798 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-31 04:08:35.625802 | orchestrator | Tuesday 31 March 2026 04:08:28 +0000 (0:00:00.305) 0:00:11.960 *********
2026-03-31 04:08:35.625806 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625810 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625814 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625818 | orchestrator |
2026-03-31 04:08:35.625822 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-31 04:08:35.625826 | orchestrator | Tuesday 31 March 2026 04:08:28 +0000 (0:00:00.391) 0:00:12.352 *********
2026-03-31 04:08:35.625829 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625833 | orchestrator |
2026-03-31 04:08:35.625837 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-31 04:08:35.625841 | orchestrator | Tuesday 31 March 2026 04:08:29 +0000 (0:00:01.005) 0:00:13.358 *********
2026-03-31 04:08:35.625845 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:08:35.625849 | orchestrator |
2026-03-31 04:08:35.625853 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-31 04:08:35.625857 | orchestrator | Tuesday 31 March 2026 04:08:31 +0000 (0:00:01.750) 0:00:15.108 *********
2026-03-31 04:08:35.625861 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625865 | orchestrator |
2026-03-31 04:08:35.625869 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-31 04:08:35.625873 | orchestrator | Tuesday 31 March 2026 04:08:31 +0000 (0:00:00.196) 0:00:15.305 *********
2026-03-31 04:08:35.625877 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625881 | orchestrator |
2026-03-31 04:08:35.625885 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-31 04:08:35.625889 | orchestrator | Tuesday 31 March 2026 04:08:32 +0000 (0:00:00.395) 0:00:15.700 *********
2026-03-31 04:08:35.625893 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:35.625897 | orchestrator |
2026-03-31 04:08:35.625901 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-31 04:08:35.625905 | orchestrator | Tuesday 31 March 2026 04:08:32 +0000 (0:00:00.162) 0:00:15.862 *********
2026-03-31 04:08:35.625909 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625913 | orchestrator |
2026-03-31 04:08:35.625916 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-31 04:08:35.625920 | orchestrator | Tuesday 31 March 2026 04:08:32 +0000 (0:00:00.152) 0:00:16.015 *********
2026-03-31 04:08:35.625924 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:35.625931 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:35.625935 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:35.625939 | orchestrator |
2026-03-31 04:08:35.625943 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-31 04:08:35.625947 | orchestrator | Tuesday 31 March 2026 04:08:32 +0000 (0:00:00.365) 0:00:16.380 *********
2026-03-31 04:08:35.625951 | orchestrator | changed: [testbed-node-3]
2026-03-31 04:08:35.625955 | orchestrator | changed: [testbed-node-4]
2026-03-31 04:08:35.625959 | orchestrator | changed: [testbed-node-5]
2026-03-31 04:08:47.088915 | orchestrator |
2026-03-31 04:08:47.089033 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-31 04:08:47.089052 | orchestrator | Tuesday 31 March 2026 04:08:35 +0000 (0:00:02.673) 0:00:19.054 *********
2026-03-31 04:08:47.089065 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089078 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089089 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089100 | orchestrator |
2026-03-31 04:08:47.089111 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-31 04:08:47.089123 | orchestrator | Tuesday 31 March 2026 04:08:35 +0000 (0:00:00.360) 0:00:19.414 *********
2026-03-31 04:08:47.089133 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089145 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089155 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089166 | orchestrator |
2026-03-31 04:08:47.089177 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-31 04:08:47.089188 | orchestrator | Tuesday 31 March 2026 04:08:36 +0000 (0:00:00.585) 0:00:20.000 *********
2026-03-31 04:08:47.089199 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:47.089211 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:47.089222 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:47.089233 | orchestrator |
2026-03-31 04:08:47.089253 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-31 04:08:47.089269 | orchestrator | Tuesday 31 March 2026 04:08:36 +0000 (0:00:00.345) 0:00:20.345 *********
2026-03-31 04:08:47.089318 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089337 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089355 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089372 | orchestrator |
2026-03-31 04:08:47.089389 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-31 04:08:47.089468 | orchestrator | Tuesday 31 March 2026 04:08:37 +0000 (0:00:00.647) 0:00:20.993 *********
2026-03-31 04:08:47.089486 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:47.089505 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:47.089550 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:47.089572 | orchestrator |
2026-03-31 04:08:47.089591 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-31 04:08:47.089609 | orchestrator | Tuesday 31 March 2026 04:08:37 +0000 (0:00:00.323) 0:00:21.317 *********
2026-03-31 04:08:47.089622 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:47.089636 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:47.089650 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:47.089663 | orchestrator |
2026-03-31 04:08:47.089677 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-31 04:08:47.089690 | orchestrator | Tuesday 31 March 2026 04:08:38 +0000 (0:00:00.304) 0:00:21.621 *********
2026-03-31 04:08:47.089704 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089717 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089730 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089743 | orchestrator |
2026-03-31 04:08:47.089756 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-31 04:08:47.089768 | orchestrator | Tuesday 31 March 2026 04:08:38 +0000 (0:00:00.587) 0:00:22.209 *********
2026-03-31 04:08:47.089782 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089795 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089808 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089842 | orchestrator |
2026-03-31 04:08:47.089854 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-31 04:08:47.089864 | orchestrator | Tuesday 31 March 2026 04:08:39 +0000 (0:00:00.946) 0:00:23.156 *********
2026-03-31 04:08:47.089875 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.089886 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.089897 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.089907 | orchestrator |
2026-03-31 04:08:47.089918 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-31 04:08:47.089929 | orchestrator | Tuesday 31 March 2026 04:08:40 +0000 (0:00:00.375) 0:00:23.531 *********
2026-03-31 04:08:47.089940 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:47.089950 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:08:47.089961 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:08:47.089972 | orchestrator |
2026-03-31 04:08:47.089983 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-31 04:08:47.089993 | orchestrator | Tuesday 31 March 2026 04:08:40 +0000 (0:00:00.345) 0:00:23.877 *********
2026-03-31 04:08:47.090004 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:08:47.090015 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:08:47.090086 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:08:47.090097 | orchestrator |
2026-03-31 04:08:47.090143 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-31 04:08:47.090156 | orchestrator | Tuesday 31 March 2026 04:08:41 +0000 (0:00:00.600) 0:00:24.477 *********
2026-03-31 04:08:47.090167 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 04:08:47.090178 | orchestrator |
2026-03-31 04:08:47.090189 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-31 04:08:47.090200 | orchestrator | Tuesday 31 March 2026 04:08:41 +0000 (0:00:00.287) 0:00:24.764 *********
2026-03-31 04:08:47.090211 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:08:47.090222 | orchestrator |
2026-03-31 04:08:47.090233 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-31 04:08:47.090244 | orchestrator | Tuesday 31 March 2026 04:08:41 +0000 (0:00:00.276) 0:00:25.041 *********
2026-03-31 04:08:47.090255 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 04:08:47.090266 | orchestrator |
2026-03-31 04:08:47.090276 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-31 04:08:47.090288 | orchestrator | Tuesday 31 March 2026 04:08:43 +0000 (0:00:01.751) 0:00:26.793 *********
2026-03-31 04:08:47.090298 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 04:08:47.090309 | orchestrator |
2026-03-31 04:08:47.090320 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-31 04:08:47.090331 | orchestrator | Tuesday 31 March 2026 04:08:43 +0000 (0:00:00.287) 0:00:27.081 *********
2026-03-31 04:08:47.090342 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 04:08:47.090353 | orchestrator |
2026-03-31 04:08:47.090388 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:47.090452 | orchestrator | Tuesday 31 March 2026 04:08:43 +0000 (0:00:00.314) 0:00:27.395 *********
2026-03-31 04:08:47.090472 | orchestrator |
2026-03-31 04:08:47.090490 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:47.090508 | orchestrator | Tuesday 31 March 2026 04:08:44 +0000 (0:00:00.074) 0:00:27.470 *********
2026-03-31 04:08:47.090520 | orchestrator |
2026-03-31 04:08:47.090530 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-31 04:08:47.090541 | orchestrator | Tuesday 31 March 2026 04:08:44 +0000 (0:00:00.073) 0:00:27.544 *********
2026-03-31 04:08:47.090552 | orchestrator |
2026-03-31 04:08:47.090563 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-31 04:08:47.090573 | orchestrator | Tuesday 31 March 2026 04:08:44 +0000 (0:00:00.075) 0:00:27.619 *********
2026-03-31 04:08:47.090584 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-31 04:08:47.090608 | orchestrator |
2026-03-31 04:08:47.090619 | orchestrator | TASK [Print report file information] *******************************************
2026-03-31 04:08:47.090629 | orchestrator | Tuesday 31 March 2026 04:08:45 +0000 (0:00:01.792) 0:00:29.412 *********
2026-03-31 04:08:47.090640 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-31 04:08:47.090651 | orchestrator |     "msg": [
2026-03-31 04:08:47.090662 | orchestrator |         "Validator run completed.",
2026-03-31 04:08:47.090681 | orchestrator |         "You can find the report file here:",
2026-03-31 04:08:47.090692 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-03-31T04:08:17+00:00-report.json",
2026-03-31 04:08:47.090704 | orchestrator |         "on the following host:",
2026-03-31 04:08:47.090715 | orchestrator |         "testbed-manager"
2026-03-31 04:08:47.090732 | orchestrator |     ]
2026-03-31 04:08:47.090749 | orchestrator | }
2026-03-31 04:08:47.090766 | orchestrator |
2026-03-31 04:08:47.090783 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:08:47.090802 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-31 04:08:47.090822 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-31 04:08:47.090840 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-31 04:08:47.090859 | orchestrator |
2026-03-31 04:08:47.090878 | orchestrator |
2026-03-31 04:08:47.090895 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:08:47.090914 | orchestrator | Tuesday 31 March 2026 04:08:46 +0000 (0:00:00.707) 0:00:30.119 *********
2026-03-31 04:08:47.090932 | orchestrator | ===============================================================================
2026-03-31 04:08:47.090951 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.67s
2026-03-31 04:08:47.090971 | orchestrator | Write report file ------------------------------------------------------- 1.79s
2026-03-31 04:08:47.090991 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-03-31 04:08:47.091011 | orchestrator | Get ceph osd tree
------------------------------------------------------- 1.75s 2026-03-31 04:08:47.091031 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 1.01s 2026-03-31 04:08:47.091051 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.95s 2026-03-31 04:08:47.091071 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s 2026-03-31 04:08:47.091090 | orchestrator | Aggregate test results step one ----------------------------------------- 0.91s 2026-03-31 04:08:47.091108 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2026-03-31 04:08:47.091126 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.84s 2026-03-31 04:08:47.091145 | orchestrator | Print report file information ------------------------------------------- 0.71s 2026-03-31 04:08:47.091163 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.66s 2026-03-31 04:08:47.091181 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.65s 2026-03-31 04:08:47.091199 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s 2026-03-31 04:08:47.091216 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.60s 2026-03-31 04:08:47.091234 | orchestrator | Prepare test data ------------------------------------------------------- 0.59s 2026-03-31 04:08:47.091253 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.59s 2026-03-31 04:08:47.091270 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s 2026-03-31 04:08:47.091290 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.55s 2026-03-31 04:08:47.091325 | orchestrator | Get list of ceph-osd containers 
that are not running -------------------- 0.42s 2026-03-31 04:08:47.569630 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-31 04:08:47.579776 | orchestrator | + set -e 2026-03-31 04:08:47.579847 | orchestrator | + source /opt/manager-vars.sh 2026-03-31 04:08:47.579855 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-31 04:08:47.579864 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-31 04:08:47.579953 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-31 04:08:47.579961 | orchestrator | ++ CEPH_VERSION=reef 2026-03-31 04:08:47.580001 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-31 04:08:47.580010 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-31 04:08:47.580016 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-31 04:08:47.580022 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-31 04:08:47.580028 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-31 04:08:47.580034 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-31 04:08:47.580039 | orchestrator | ++ export ARA=false 2026-03-31 04:08:47.580045 | orchestrator | ++ ARA=false 2026-03-31 04:08:47.580051 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-31 04:08:47.580056 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-31 04:08:47.580062 | orchestrator | ++ export TEMPEST=false 2026-03-31 04:08:47.580067 | orchestrator | ++ TEMPEST=false 2026-03-31 04:08:47.580073 | orchestrator | ++ export IS_ZUUL=true 2026-03-31 04:08:47.580079 | orchestrator | ++ IS_ZUUL=true 2026-03-31 04:08:47.580084 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 04:08:47.580090 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240 2026-03-31 04:08:47.580095 | orchestrator | ++ export EXTERNAL_API=false 2026-03-31 04:08:47.580101 | orchestrator | ++ EXTERNAL_API=false 2026-03-31 04:08:47.580106 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-31 04:08:47.580112 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-31 
04:08:47.580118 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-31 04:08:47.580123 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-31 04:08:47.580129 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-31 04:08:47.580134 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-31 04:08:47.580140 | orchestrator | + source /etc/os-release 2026-03-31 04:08:47.580145 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-31 04:08:47.580151 | orchestrator | ++ NAME=Ubuntu 2026-03-31 04:08:47.580165 | orchestrator | ++ VERSION_ID=24.04 2026-03-31 04:08:47.580171 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-31 04:08:47.580176 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-31 04:08:47.580182 | orchestrator | ++ ID=ubuntu 2026-03-31 04:08:47.580187 | orchestrator | ++ ID_LIKE=debian 2026-03-31 04:08:47.580193 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-31 04:08:47.580198 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-31 04:08:47.580204 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-31 04:08:47.580209 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-31 04:08:47.580215 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-31 04:08:47.580221 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-31 04:08:47.580226 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-31 04:08:47.580245 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-31 04:08:47.580252 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-31 04:08:47.611451 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-31 04:09:13.382227 | orchestrator | 2026-03-31 04:09:13.382447 | orchestrator | # Status of Elasticsearch 
2026-03-31 04:09:13.382481 | orchestrator |
2026-03-31 04:09:13.382504 | orchestrator | + pushd /opt/configuration/contrib
2026-03-31 04:09:13.382525 | orchestrator | + echo
2026-03-31 04:09:13.382546 | orchestrator | + echo '# Status of Elasticsearch'
2026-03-31 04:09:13.382565 | orchestrator | + echo
2026-03-31 04:09:13.382585 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-03-31 04:09:13.596009 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-03-31 04:09:13.596124 | orchestrator |
2026-03-31 04:09:13.596140 | orchestrator | # Status of MariaDB
2026-03-31 04:09:13.596175 | orchestrator | + echo
2026-03-31 04:09:13.596186 | orchestrator | + echo '# Status of MariaDB'
2026-03-31 04:09:13.596196 | orchestrator | + echo
2026-03-31 04:09:13.596206 | orchestrator |
2026-03-31 04:09:13.596564 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-31 04:09:13.663223 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 04:09:13.663328 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-31 04:09:13.663350 | orchestrator | + MARIADB_USER=root_shard_0
2026-03-31 04:09:13.663457 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-03-31 04:09:13.734283 | orchestrator | Reading package lists...
2026-03-31 04:09:14.110759 | orchestrator | Building dependency tree...
2026-03-31 04:09:14.111063 | orchestrator | Reading state information...
2026-03-31 04:09:14.606007 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-03-31 04:09:14.606214 | orchestrator | bc set to manually installed.
2026-03-31 04:09:14.606233 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2026-03-31 04:09:15.391063 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-03-31 04:09:15.391354 | orchestrator |
2026-03-31 04:09:15.391458 | orchestrator | # Status of Prometheus
2026-03-31 04:09:15.391473 | orchestrator |
2026-03-31 04:09:15.391485 | orchestrator | + echo
2026-03-31 04:09:15.391496 | orchestrator | + echo '# Status of Prometheus'
2026-03-31 04:09:15.391507 | orchestrator | + echo
2026-03-31 04:09:15.391518 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-03-31 04:09:15.466242 | orchestrator | Unauthorized
2026-03-31 04:09:15.471031 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-03-31 04:09:15.539490 | orchestrator | Unauthorized
2026-03-31 04:09:15.543714 | orchestrator |
2026-03-31 04:09:15.543788 | orchestrator | # Status of RabbitMQ
2026-03-31 04:09:15.543803 | orchestrator |
2026-03-31 04:09:15.543815 | orchestrator | + echo
2026-03-31 04:09:15.543826 | orchestrator | + echo '# Status of RabbitMQ'
2026-03-31 04:09:15.543837 | orchestrator | + echo
2026-03-31 04:09:15.545100 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-31 04:09:15.611460 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 04:09:15.611559 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-31 04:09:15.611575 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-03-31 04:09:16.144779 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-03-31 04:09:16.161846 | orchestrator |
2026-03-31 04:09:16.161961 | orchestrator | # Status of Redis
2026-03-31 04:09:16.161981 | orchestrator |
2026-03-31 04:09:16.161993 | orchestrator | + echo
2026-03-31 04:09:16.162005 | orchestrator | + echo '# Status of Redis'
2026-03-31 04:09:16.162077 | orchestrator | + echo
2026-03-31 04:09:16.162094 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-31 04:09:16.167176 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002101s;;;0.000000;10.000000
2026-03-31 04:09:16.167240 | orchestrator |
2026-03-31 04:09:16.167253 | orchestrator | # Create backup of MariaDB database
2026-03-31 04:09:16.167266 | orchestrator |
2026-03-31 04:09:16.167277 | orchestrator | + popd
2026-03-31 04:09:16.167288 | orchestrator | + echo
2026-03-31 04:09:16.167300 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-31 04:09:16.167311 | orchestrator | + echo
2026-03-31 04:09:16.167323 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-31 04:09:18.396694 | orchestrator | 2026-03-31 04:09:18 | INFO  | Task fe39516d-53ea-4b72-8147-700096cf4ce2 (mariadb_backup) was prepared for execution.
2026-03-31 04:09:18.396802 | orchestrator | 2026-03-31 04:09:18 | INFO  | It takes a moment until task fe39516d-53ea-4b72-8147-700096cf4ce2 (mariadb_backup) has been started and output is visible here.
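[editor's note] The `++ semver 9.5.0 10.0.0-0` / `[[ -1 -ge 0 ]]` lines above show the check script branching on a version comparison (here selecting the `root_shard_0` MariaDB user for managers older than 10.0.0). A minimal Python sketch of such a three-way semantic-version comparison, assuming the helper name `semver_cmp` and the simplification that pre-release suffixes after `-` are ignored (the testbed's actual `semver` helper may differ):

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare versions like '9.5.0' and '10.0.0-0'; return -1, 0, or 1.

    Pre-release suffixes (everything after '-') are dropped in this sketch.
    """
    def parts(v: str) -> list[int]:
        core = v.split("-", 1)[0]              # drop pre-release suffix
        return [int(x) for x in core.split(".")]

    pa, pb = parts(a), parts(b)
    # pad with zeros so '9.5' compares like '9.5.0'
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return (pa > pb) - (pa < pb)

# mirrors the gate in the trace: 9.5.0 sorts before 10.0.0-0
print(semver_cmp("9.5.0", "10.0.0-0"))  # -1
```

With a result of -1, the `[[ -1 -ge 0 ]]` test fails and the script takes the legacy branch, as seen in the trace.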
2026-03-31 04:09:49.321962 | orchestrator |
2026-03-31 04:09:49.322146 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 04:09:49.322171 | orchestrator |
2026-03-31 04:09:49.322188 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 04:09:49.322205 | orchestrator | Tuesday 31 March 2026 04:09:23 +0000 (0:00:00.249) 0:00:00.249 *********
2026-03-31 04:09:49.322220 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:09:49.322237 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:09:49.322265 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:09:49.322274 | orchestrator |
2026-03-31 04:09:49.322283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 04:09:49.322292 | orchestrator | Tuesday 31 March 2026 04:09:23 +0000 (0:00:00.374) 0:00:00.623 *********
2026-03-31 04:09:49.322301 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-31 04:09:49.322310 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-31 04:09:49.322413 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-31 04:09:49.322424 | orchestrator |
2026-03-31 04:09:49.322433 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-31 04:09:49.322442 | orchestrator |
2026-03-31 04:09:49.322451 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-31 04:09:49.322461 | orchestrator | Tuesday 31 March 2026 04:09:24 +0000 (0:00:00.703) 0:00:01.326 *********
2026-03-31 04:09:49.322472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:09:49.322483 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 04:09:49.322495 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 04:09:49.322511 | orchestrator |
2026-03-31 04:09:49.322524 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-31 04:09:49.322538 | orchestrator | Tuesday 31 March 2026 04:09:24 +0000 (0:00:00.491) 0:00:01.818 *********
2026-03-31 04:09:49.322570 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:09:49.322587 | orchestrator |
2026-03-31 04:09:49.322602 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-31 04:09:49.322618 | orchestrator | Tuesday 31 March 2026 04:09:25 +0000 (0:00:00.668) 0:00:02.486 *********
2026-03-31 04:09:49.322635 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:09:49.322650 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:09:49.322666 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:09:49.322680 | orchestrator |
2026-03-31 04:09:49.322695 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-31 04:09:49.322710 | orchestrator | Tuesday 31 March 2026 04:09:29 +0000 (0:00:03.612) 0:00:06.099 *********
2026-03-31 04:09:49.322726 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-31 04:09:49.322743 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-31 04:09:49.322759 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-31 04:09:49.322772 | orchestrator | mariadb_bootstrap_restart
2026-03-31 04:09:49.322782 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:09:49.322793 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:09:49.322803 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:09:49.322813 | orchestrator |
2026-03-31 04:09:49.322823 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-31 04:09:49.322834 | orchestrator | skipping: no hosts matched
2026-03-31 04:09:49.322844 | orchestrator |
2026-03-31 04:09:49.322855 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-31 04:09:49.322865 | orchestrator | skipping: no hosts matched
2026-03-31 04:09:49.322875 | orchestrator |
2026-03-31 04:09:49.322884 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-31 04:09:49.322892 | orchestrator | skipping: no hosts matched
2026-03-31 04:09:49.322901 | orchestrator |
2026-03-31 04:09:49.322909 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-31 04:09:49.322918 | orchestrator |
2026-03-31 04:09:49.322926 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-31 04:09:49.322935 | orchestrator | Tuesday 31 March 2026 04:09:47 +0000 (0:00:18.569) 0:00:24.668 *********
2026-03-31 04:09:49.322944 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:09:49.322952 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:09:49.322961 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:09:49.322979 | orchestrator |
2026-03-31 04:09:49.322988 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-31 04:09:49.322996 | orchestrator | Tuesday 31 March 2026 04:09:48 +0000 (0:00:00.378) 0:00:25.046 *********
2026-03-31 04:09:49.323011 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:09:49.323025 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:09:49.323037 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:09:49.323049 | orchestrator |
2026-03-31 04:09:49.323062 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:09:49.323077 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:09:49.323094 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 04:09:49.323110 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 04:09:49.323124 | orchestrator |
2026-03-31 04:09:49.323137 | orchestrator |
2026-03-31 04:09:49.323146 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:09:49.323155 | orchestrator | Tuesday 31 March 2026 04:09:48 +0000 (0:00:00.616) 0:00:25.663 *********
2026-03-31 04:09:49.323163 | orchestrator | ===============================================================================
2026-03-31 04:09:49.323172 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.57s
2026-03-31 04:09:49.323199 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.61s
2026-03-31 04:09:49.323209 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-03-31 04:09:49.323218 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.67s
2026-03-31 04:09:49.323226 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.62s
2026-03-31 04:09:49.323235 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.49s
2026-03-31 04:09:49.323243 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.38s
2026-03-31 04:09:49.323252 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-03-31 04:09:49.875536 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-31 04:09:49.887040 | orchestrator | + set -e
2026-03-31 04:09:49.887133 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:09:49.888250 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:09:49.888369 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:09:49.888381 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:09:49.888389 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:09:49.888397 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-31 04:09:49.890599 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:09:49.897780 | orchestrator |
2026-03-31 04:09:49.897858 | orchestrator | # OpenStack endpoints
2026-03-31 04:09:49.897872 | orchestrator |
2026-03-31 04:09:49.897883 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 04:09:49.897894 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 04:09:49.897906 | orchestrator | + export OS_CLOUD=admin
2026-03-31 04:09:49.897917 | orchestrator | + OS_CLOUD=admin
2026-03-31 04:09:49.897927 | orchestrator | + echo
2026-03-31 04:09:49.897938 | orchestrator | + echo '# OpenStack endpoints'
2026-03-31 04:09:49.897949 | orchestrator | + echo
2026-03-31 04:09:49.897960 | orchestrator | + openstack endpoint list
2026-03-31 04:09:53.380855 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-31 04:09:53.380962 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-31 04:09:53.380978 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-31 04:09:53.381014 | orchestrator | | 0188c06a02af43339803e95b8c47a93c | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-03-31 04:09:53.381027 | orchestrator | | 2b09340e0b8544e68d1685fa5112c562 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-31 04:09:53.381038 | orchestrator | | 479e2b307c0d49ca955885a4a52f7b41 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-31 04:09:53.381049 | orchestrator | | 4e6e4dd753fa41b5844c06b24f23fccc | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-31 04:09:53.381060 | orchestrator | | 5c4021d80f7c4527ac18e922537f30e5 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-31 04:09:53.381071 | orchestrator | | 5c5412b83739414ca5fefc04aa4da795 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-31 04:09:53.381083 | orchestrator | | 656cddeb6edf45149f70075a23aaa7b0 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-31 04:09:53.381095 | orchestrator | | 703b5008e7ef40a385eb04cf4004fad1 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-03-31 04:09:53.381106 | orchestrator | | 7552105ca7c74cc2ac64855b6d2c2a29 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-31 04:09:53.381117 | orchestrator | | 7808cab0ee264bb2a5d77cc30fb1130e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-31 04:09:53.381144 | orchestrator | | 7abb9a22aac148ec9d3cc06c4ed6f573 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-31 04:09:53.381156 | orchestrator | | 7c3557aa252045fc98854950ccf10957 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-31 04:09:53.381167 | orchestrator | | 8064718aad464b9383b10df7a365a1a7 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-31 04:09:53.381178 | orchestrator | | 8178be254e5b4867a58ab660b6a37ca2 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-31 04:09:53.381189 | orchestrator | | 839447da6c6a41e884567e51fe85fc06 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-31 04:09:53.381200 | orchestrator | | 855b0427bca54eeb98bf226e68b5ae84 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-03-31 04:09:53.381211 | orchestrator | | 8813e8290ad74eb9b248db4bcb73438e | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-31 04:09:53.381222 | orchestrator | | 886c3ad9db8f429aa99a8db7c803536f | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-31 04:09:53.381233 | orchestrator | | 959560f3fc4e4807942bc2f11be83d02 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-03-31 04:09:53.381244 | orchestrator | | 96df72852f0348678e758a8f293a366a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-31 04:09:53.381279 | orchestrator | | 970754c76b6f47f494f4ae5cad9ba074 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-31 04:09:53.381296 | orchestrator | | 9a226ba2e3b244879e659deacfe1a470 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-31 04:09:53.381308 | orchestrator | | c931b8ab19834bb5a5edcbd2084a0047 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-31 04:09:53.381376 | orchestrator | | cdfd4a40b1c8450188114e4a55c42362 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-31 04:09:53.381388 | orchestrator | | d3915374ee3f4561a8ef20d2cdeec6d2 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-31 04:09:53.381403 | orchestrator | | d5787b55d6f4470bb80c41ea3642daf1 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-31 04:09:53.381415 | orchestrator | | d912f326c1dc4d478d94d3d2be388717 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-31 04:09:53.381428 | orchestrator | | db48efd8d8b148829fd56d939aec29c6 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-03-31 04:09:53.381441 | orchestrator | | dcfb819b337f4e55aa12a5481ffd65e3 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-03-31 04:09:53.381453 | orchestrator | | e98d5538510547d0aa35b5523ff68fdb | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-31 04:09:53.381466 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-31 04:09:53.671794 | orchestrator |
2026-03-31 04:09:53.671875 | orchestrator | # Cinder
2026-03-31 04:09:53.671884 | orchestrator |
2026-03-31 04:09:53.671891 | orchestrator | + echo
2026-03-31 04:09:53.671897 | orchestrator | + echo '# Cinder'
2026-03-31 04:09:53.671904 | orchestrator | + echo
2026-03-31 04:09:53.671910 | orchestrator | + openstack volume service list
2026-03-31 04:09:56.662760 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-31 04:09:56.662887 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-31 04:09:56.662903 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-31 04:09:56.662914 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-31T04:09:54.000000 |
2026-03-31 04:09:56.662924 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-31T04:09:54.000000 |
2026-03-31 04:09:56.662934 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-31T04:09:54.000000 |
2026-03-31 04:09:56.662944 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-31T04:09:54.000000 |
2026-03-31 04:09:56.662967 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-31T04:09:48.000000 |
2026-03-31 04:09:56.663802 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-31T04:09:51.000000 |
2026-03-31 04:09:56.663833 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-31T04:09:48.000000 |
2026-03-31 04:09:56.663851 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-31T04:09:51.000000 |
2026-03-31 04:09:56.663868 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-31T04:09:51.000000 |
2026-03-31 04:09:56.663914 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-31 04:09:56.980716 | orchestrator |
2026-03-31 04:09:56.980842 | orchestrator | # Neutron
2026-03-31 04:09:56.980866 | orchestrator |
2026-03-31 04:09:56.980879 | orchestrator | + echo
2026-03-31 04:09:56.980890 | orchestrator | + echo '# Neutron'
2026-03-31 04:09:56.980902 | orchestrator | + echo
2026-03-31 04:09:56.980913 | orchestrator | + openstack network agent list
2026-03-31 04:09:59.711203 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-31 04:09:59.711299 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-31 04:09:59.711354 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-31 04:09:59.711365 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711375 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711385 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711412 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711422 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711432 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-31 04:09:59.711441 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-31 04:09:59.711451 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-31 04:09:59.711460 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-31 04:09:59.711470 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-31 04:10:00.029354 | orchestrator | + openstack network service provider list
2026-03-31 04:10:02.749916 | orchestrator | +---------------+------+---------+
2026-03-31 04:10:02.750012 | orchestrator | | Service Type | Name | Default |
2026-03-31 04:10:02.750057 | orchestrator | +---------------+------+---------+
2026-03-31 04:10:02.750065 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-31 04:10:02.750072 | orchestrator | +---------------+------+---------+
2026-03-31 04:10:03.229988 | orchestrator |
2026-03-31 04:10:03.230136 | orchestrator | # Nova
2026-03-31 04:10:03.230150 | orchestrator |
2026-03-31 04:10:03.230161 | orchestrator | + echo
2026-03-31 04:10:03.230170 | orchestrator | + echo '# Nova'
2026-03-31 04:10:03.230180 | orchestrator | + echo
2026-03-31 04:10:03.230191 | orchestrator | + openstack compute service list
2026-03-31 04:10:06.047680 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-31 04:10:06.047786 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-31 04:10:06.047802 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-31 04:10:06.047814 | orchestrator | | fbe16e61-5502-4e36-a403-4776a72ba955 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-31T04:10:00.000000 |
2026-03-31 04:10:06.047852 | orchestrator | | afcdcf92-265b-45a9-8cd1-43b458f9f25a | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-31T04:10:04.000000 |
2026-03-31 04:10:06.047864 | orchestrator | | 7714db8c-b3da-4853-836a-e6458ef5c56c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-31T04:09:57.000000 |
2026-03-31 04:10:06.047877 | orchestrator | | dc722416-7157-4edc-8be5-102a7594bd51 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-31T04:10:02.000000 |
2026-03-31 04:10:06.047888 | orchestrator | | 76762c1f-bc7d-4e6c-a852-348907fbbbe8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-31T04:10:03.000000 |
2026-03-31 04:10:06.047900 | orchestrator | |
2ebda629-cb81-4149-a9bf-1db0ba6a5a75 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-31T04:10:04.000000 | 2026-03-31 04:10:06.047911 | orchestrator | | 9c013f78-e8dd-4617-8de3-a40f2381e325 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-31T04:10:02.000000 | 2026-03-31 04:10:06.047922 | orchestrator | | 1fb0d6e3-470e-4044-9df8-22bb8aa128a0 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-31T04:10:02.000000 | 2026-03-31 04:10:06.047934 | orchestrator | | de78e688-32c8-4aba-984f-be9b8090e72b | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-31T04:10:03.000000 | 2026-03-31 04:10:06.047945 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-31 04:10:06.547563 | orchestrator | + openstack hypervisor list 2026-03-31 04:10:09.516050 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-31 04:10:09.516146 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-31 04:10:09.516157 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-31 04:10:09.516167 | orchestrator | | 74d47208-bf43-459c-a75b-40d5ab302ca5 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-31 04:10:09.516175 | orchestrator | | a7c69948-d67f-4612-8d16-0455b32a4594 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-31 04:10:09.516183 | orchestrator | | f385ad80-3a7b-4487-b583-7f51ede7e9e2 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-31 04:10:09.516191 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-31 04:10:09.990091 | orchestrator | 2026-03-31 04:10:09.990184 | orchestrator | # Run OpenStack test play 2026-03-31 04:10:09.990200 | orchestrator | 2026-03-31 
04:10:09.990211 | orchestrator | + echo 2026-03-31 04:10:09.990223 | orchestrator | + echo '# Run OpenStack test play' 2026-03-31 04:10:09.990238 | orchestrator | + echo 2026-03-31 04:10:09.990249 | orchestrator | + osism apply --environment openstack test 2026-03-31 04:10:12.376069 | orchestrator | 2026-03-31 04:10:12 | INFO  | Trying to run play test in environment openstack 2026-03-31 04:10:22.483896 | orchestrator | 2026-03-31 04:10:22 | INFO  | Task f37fce66-8f10-4320-bcc4-94731b862be8 (test) was prepared for execution. 2026-03-31 04:10:22.483986 | orchestrator | 2026-03-31 04:10:22 | INFO  | It takes a moment until task f37fce66-8f10-4320-bcc4-94731b862be8 (test) has been started and output is visible here. 2026-03-31 04:13:39.570761 | orchestrator | 2026-03-31 04:13:39.570909 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-31 04:13:39.570939 | orchestrator | 2026-03-31 04:13:39.570960 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-31 04:13:39.570979 | orchestrator | Tuesday 31 March 2026 04:10:26 +0000 (0:00:00.074) 0:00:00.074 ********* 2026-03-31 04:13:39.571000 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571019 | orchestrator | 2026-03-31 04:13:39.571039 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-31 04:13:39.571059 | orchestrator | Tuesday 31 March 2026 04:10:30 +0000 (0:00:03.723) 0:00:03.798 ********* 2026-03-31 04:13:39.571216 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571243 | orchestrator | 2026-03-31 04:13:39.571262 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-31 04:13:39.571281 | orchestrator | Tuesday 31 March 2026 04:10:34 +0000 (0:00:04.194) 0:00:07.993 ********* 2026-03-31 04:13:39.571299 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571318 | orchestrator | 2026-03-31 
04:13:39.571331 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-31 04:13:39.571343 | orchestrator | Tuesday 31 March 2026 04:10:41 +0000 (0:00:06.742) 0:00:14.735 ********* 2026-03-31 04:13:39.571355 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571367 | orchestrator | 2026-03-31 04:13:39.571379 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-31 04:13:39.571392 | orchestrator | Tuesday 31 March 2026 04:10:45 +0000 (0:00:04.148) 0:00:18.883 ********* 2026-03-31 04:13:39.571405 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571417 | orchestrator | 2026-03-31 04:13:39.571429 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-31 04:13:39.571442 | orchestrator | Tuesday 31 March 2026 04:10:50 +0000 (0:00:04.466) 0:00:23.349 ********* 2026-03-31 04:13:39.571455 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-31 04:13:39.571468 | orchestrator | changed: [localhost] => (item=member) 2026-03-31 04:13:39.571482 | orchestrator | changed: [localhost] => (item=creator) 2026-03-31 04:13:39.571495 | orchestrator | 2026-03-31 04:13:39.571507 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-31 04:13:39.571519 | orchestrator | Tuesday 31 March 2026 04:11:01 +0000 (0:00:11.727) 0:00:35.077 ********* 2026-03-31 04:13:39.571532 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571544 | orchestrator | 2026-03-31 04:13:39.571556 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-31 04:13:39.571568 | orchestrator | Tuesday 31 March 2026 04:11:06 +0000 (0:00:04.831) 0:00:39.908 ********* 2026-03-31 04:13:39.571580 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571592 | orchestrator | 2026-03-31 04:13:39.571604 | orchestrator | TASK [Add rule 
to ssh security group] ****************************************** 2026-03-31 04:13:39.571617 | orchestrator | Tuesday 31 March 2026 04:11:11 +0000 (0:00:04.699) 0:00:44.608 ********* 2026-03-31 04:13:39.571629 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571641 | orchestrator | 2026-03-31 04:13:39.571653 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-31 04:13:39.571664 | orchestrator | Tuesday 31 March 2026 04:11:15 +0000 (0:00:04.368) 0:00:48.976 ********* 2026-03-31 04:13:39.571674 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571685 | orchestrator | 2026-03-31 04:13:39.571696 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-03-31 04:13:39.571706 | orchestrator | Tuesday 31 March 2026 04:11:19 +0000 (0:00:03.967) 0:00:52.944 ********* 2026-03-31 04:13:39.571717 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571727 | orchestrator | 2026-03-31 04:13:39.571738 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-31 04:13:39.571748 | orchestrator | Tuesday 31 March 2026 04:11:24 +0000 (0:00:04.347) 0:00:57.291 ********* 2026-03-31 04:13:39.571759 | orchestrator | changed: [localhost] 2026-03-31 04:13:39.571770 | orchestrator | 2026-03-31 04:13:39.571780 | orchestrator | TASK [Create test networks] **************************************************** 2026-03-31 04:13:39.571791 | orchestrator | Tuesday 31 March 2026 04:11:28 +0000 (0:00:04.076) 0:01:01.367 ********* 2026-03-31 04:13:39.571801 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-03-31 04:13:39.571812 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-03-31 04:13:39.571823 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-03-31 04:13:39.571834 | orchestrator | 2026-03-31 04:13:39.571845 | orchestrator | TASK [Create test subnets] 
***************************************************** 2026-03-31 04:13:39.571856 | orchestrator | Tuesday 31 March 2026 04:11:41 +0000 (0:00:13.807) 0:01:15.175 ********* 2026-03-31 04:13:39.571877 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-03-31 04:13:39.571888 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-03-31 04:13:39.571899 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-03-31 04:13:39.571910 | orchestrator | 2026-03-31 04:13:39.571921 | orchestrator | TASK [Create test routers] ***************************************************** 2026-03-31 04:13:39.571932 | orchestrator | Tuesday 31 March 2026 04:11:57 +0000 (0:00:15.109) 0:01:30.285 ********* 2026-03-31 04:13:39.571943 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-03-31 04:13:39.571954 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-03-31 04:13:39.571980 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-03-31 04:13:39.571991 | orchestrator | 2026-03-31 04:13:39.572002 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-31 04:13:39.572012 | orchestrator | 2026-03-31 04:13:39.572023 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-31 04:13:39.572054 | orchestrator | Tuesday 31 March 2026 04:12:27 +0000 (0:00:29.969) 0:02:00.254 ********* 2026-03-31 04:13:39.572066 | orchestrator | ok: [localhost] 2026-03-31 04:13:39.572106 | orchestrator | 2026-03-31 04:13:39.572119 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-31 
04:13:39.572129 | orchestrator | Tuesday 31 March 2026 04:12:30 +0000 (0:00:03.695) 0:02:03.950 ********* 2026-03-31 04:13:39.572140 | orchestrator | skipping: [localhost] 2026-03-31 04:13:39.572151 | orchestrator | 2026-03-31 04:13:39.572161 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-31 04:13:39.572172 | orchestrator | Tuesday 31 March 2026 04:12:30 +0000 (0:00:00.057) 0:02:04.008 ********* 2026-03-31 04:13:39.572182 | orchestrator | skipping: [localhost] 2026-03-31 04:13:39.572193 | orchestrator | 2026-03-31 04:13:39.572203 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-31 04:13:39.572214 | orchestrator | Tuesday 31 March 2026 04:12:30 +0000 (0:00:00.058) 0:02:04.066 ********* 2026-03-31 04:13:39.572224 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-03-31 04:13:39.572235 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-03-31 04:13:39.572246 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-03-31 04:13:39.572260 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-03-31 04:13:39.572278 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-03-31 04:13:39.572297 | orchestrator | skipping: [localhost] 2026-03-31 04:13:39.572315 | orchestrator | 2026-03-31 04:13:39.572332 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-31 04:13:39.572350 | orchestrator | Tuesday 31 March 2026 04:12:31 +0000 (0:00:00.187) 0:02:04.254 ********* 2026-03-31 04:13:39.572367 | orchestrator | skipping: [localhost] 2026-03-31 04:13:39.572384 | orchestrator | 2026-03-31 04:13:39.572401 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-31 
04:13:39.572418 | orchestrator | Tuesday 31 March 2026 04:12:31 +0000 (0:00:00.173) 0:02:04.427 ********* 2026-03-31 04:13:39.572434 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-03-31 04:13:39.572451 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-03-31 04:13:39.572468 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-03-31 04:13:39.572486 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-03-31 04:13:39.572503 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-03-31 04:13:39.572534 | orchestrator | 2026-03-31 04:13:39.572552 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-31 04:13:39.572570 | orchestrator | Tuesday 31 March 2026 04:12:36 +0000 (0:00:05.166) 0:02:09.594 ********* 2026-03-31 04:13:39.572587 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-31 04:13:39.572606 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-31 04:13:39.572624 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-31 04:13:39.572643 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-31 04:13:39.572665 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j274907591067.3755', 'results_file': '/ansible/.ansible_async/j274907591067.3755', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:13:39.572685 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-03-31 04:13:39.572702 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j193393728806.3780', 'results_file': '/ansible/.ansible_async/j193393728806.3780', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:13:39.572722 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j374389221870.3805', 'results_file': '/ansible/.ansible_async/j374389221870.3805', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:13:39.572740 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j902409398441.3830', 'results_file': '/ansible/.ansible_async/j902409398441.3830', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:13:39.572760 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j516217456429.3855', 'results_file': '/ansible/.ansible_async/j516217456429.3855', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:13:39.572779 | orchestrator | 2026-03-31 04:13:39.572797 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-31 04:13:39.572815 | orchestrator | Tuesday 31 March 2026 04:13:34 +0000 (0:00:58.230) 0:03:07.825 ********* 2026-03-31 04:13:39.572841 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-03-31 04:14:50.689653 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-03-31 04:14:50.689748 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-03-31 04:14:50.689758 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 
2026-03-31 04:14:50.689766 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-03-31 04:14:50.689773 | orchestrator | 2026-03-31 04:14:50.689781 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-31 04:14:50.689788 | orchestrator | Tuesday 31 March 2026 04:13:39 +0000 (0:00:04.930) 0:03:12.755 ********* 2026-03-31 04:14:50.689795 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-31 04:14:50.689805 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j942411382268.3966', 'results_file': '/ansible/.ansible_async/j942411382268.3966', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689814 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j770721485938.3991', 'results_file': '/ansible/.ansible_async/j770721485938.3991', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689838 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j842737315900.4016', 'results_file': '/ansible/.ansible_async/j842737315900.4016', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689845 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j740865896157.4041', 'results_file': '/ansible/.ansible_async/j740865896157.4041', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689866 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j331965368920.4066', 'results_file': '/ansible/.ansible_async/j331965368920.4066', 
'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689874 | orchestrator | 2026-03-31 04:14:50.689881 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-31 04:14:50.689888 | orchestrator | Tuesday 31 March 2026 04:13:49 +0000 (0:00:09.884) 0:03:22.640 ********* 2026-03-31 04:14:50.689894 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-03-31 04:14:50.689901 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-03-31 04:14:50.689907 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-03-31 04:14:50.689914 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-03-31 04:14:50.689921 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-03-31 04:14:50.689928 | orchestrator | 2026-03-31 04:14:50.689934 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-31 04:14:50.689941 | orchestrator | Tuesday 31 March 2026 04:13:54 +0000 (0:00:05.052) 0:03:27.692 ********* 2026-03-31 04:14:50.689948 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-03-31 04:14:50.689955 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j509918369399.4135', 'results_file': '/ansible/.ansible_async/j509918369399.4135', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689962 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j405355231705.4160', 'results_file': '/ansible/.ansible_async/j405355231705.4160', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689969 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j584629338362.4186', 'results_file': '/ansible/.ansible_async/j584629338362.4186', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689980 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j454714721678.4219', 'results_file': '/ansible/.ansible_async/j454714721678.4219', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.689998 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j104479572322.4245', 'results_file': '/ansible/.ansible_async/j104479572322.4245', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-03-31 04:14:50.690005 | orchestrator | 2026-03-31 04:14:50.690096 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-31 04:14:50.690104 | orchestrator | Tuesday 31 March 2026 04:14:05 +0000 (0:00:10.536) 0:03:38.228 ********* 2026-03-31 04:14:50.690118 | orchestrator | changed: [localhost] 2026-03-31 04:14:50.690126 | orchestrator | 2026-03-31 04:14:50.690132 | 
orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-31 04:14:50.690139 | orchestrator | Tuesday 31 March 2026 04:14:11 +0000 (0:00:06.464) 0:03:44.693 ********* 2026-03-31 04:14:50.690146 | orchestrator | changed: [localhost] 2026-03-31 04:14:50.690152 | orchestrator | 2026-03-31 04:14:50.690159 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-03-31 04:14:50.690166 | orchestrator | Tuesday 31 March 2026 04:14:24 +0000 (0:00:13.443) 0:03:58.136 ********* 2026-03-31 04:14:50.690173 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-03-31 04:14:50.690180 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-03-31 04:14:50.690187 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-03-31 04:14:50.690193 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-03-31 04:14:50.690200 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-03-31 04:14:50.690206 | orchestrator | 2026-03-31 04:14:50.690215 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-03-31 04:14:50.690223 | orchestrator | Tuesday 31 March 2026 04:14:50 +0000 (0:00:25.271) 0:04:23.407 ********* 2026-03-31 04:14:50.690230 | orchestrator | ok: [localhost] => (item=test) => { 2026-03-31 04:14:50.690238 | orchestrator |  "msg": "test: 192.168.112.169" 2026-03-31 04:14:50.690246 | orchestrator | } 2026-03-31 04:14:50.690254 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-03-31 04:14:50.690262 | orchestrator |  "msg": "test-1: 192.168.112.173" 2026-03-31 04:14:50.690270 | orchestrator | } 2026-03-31 04:14:50.690278 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-03-31 04:14:50.690285 | orchestrator |  "msg": "test-2: 192.168.112.152" 2026-03-31 04:14:50.690292 | 
orchestrator | } 2026-03-31 04:14:50.690300 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-03-31 04:14:50.690308 | orchestrator |  "msg": "test-3: 192.168.112.178" 2026-03-31 04:14:50.690315 | orchestrator | } 2026-03-31 04:14:50.690322 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-03-31 04:14:50.690330 | orchestrator |  "msg": "test-4: 192.168.112.108" 2026-03-31 04:14:50.690338 | orchestrator | } 2026-03-31 04:14:50.690345 | orchestrator | 2026-03-31 04:14:50.690358 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:14:50.690370 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:14:50.690384 | orchestrator | 2026-03-31 04:14:50.690402 | orchestrator | 2026-03-31 04:14:50.690424 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:14:50.690435 | orchestrator | Tuesday 31 March 2026 04:14:50 +0000 (0:00:00.128) 0:04:23.536 ********* 2026-03-31 04:14:50.690447 | orchestrator | =============================================================================== 2026-03-31 04:14:50.690457 | orchestrator | Wait for instance creation to complete --------------------------------- 58.23s 2026-03-31 04:14:50.690468 | orchestrator | Create test routers ---------------------------------------------------- 29.97s 2026-03-31 04:14:50.690479 | orchestrator | Create floating ip addresses ------------------------------------------- 25.27s 2026-03-31 04:14:50.690489 | orchestrator | Create test subnets ---------------------------------------------------- 15.11s 2026-03-31 04:14:50.690500 | orchestrator | Create test networks --------------------------------------------------- 13.81s 2026-03-31 04:14:50.690511 | orchestrator | Attach test volume ----------------------------------------------------- 13.44s 2026-03-31 04:14:50.690521 | orchestrator | Add member roles to user test 
------------------------------------------ 11.73s 2026-03-31 04:14:50.690532 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.54s 2026-03-31 04:14:50.690543 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.88s 2026-03-31 04:14:50.690563 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.74s 2026-03-31 04:14:50.690575 | orchestrator | Create test volume ------------------------------------------------------ 6.46s 2026-03-31 04:14:50.690586 | orchestrator | Create test instances --------------------------------------------------- 5.17s 2026-03-31 04:14:50.690596 | orchestrator | Add tag to instances ---------------------------------------------------- 5.05s 2026-03-31 04:14:50.690606 | orchestrator | Add metadata to instances ----------------------------------------------- 4.93s 2026-03-31 04:14:50.690617 | orchestrator | Create test server group ------------------------------------------------ 4.83s 2026-03-31 04:14:50.690627 | orchestrator | Create ssh security group ----------------------------------------------- 4.70s 2026-03-31 04:14:50.690637 | orchestrator | Create test user -------------------------------------------------------- 4.47s 2026-03-31 04:14:50.690648 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.37s 2026-03-31 04:14:50.690659 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.35s 2026-03-31 04:14:50.690676 | orchestrator | Create test-admin user -------------------------------------------------- 4.19s 2026-03-31 04:14:51.093805 | orchestrator | + server_list 2026-03-31 04:14:51.093911 | orchestrator | + openstack --os-cloud test server list 2026-03-31 04:14:54.818308 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-03-31 
04:14:54.818416 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-31 04:14:54.818433 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-03-31 04:14:54.818445 | orchestrator | | e004de17-c8a7-44d3-adfc-0ea44942548f | test-3 | ACTIVE | test-2=192.168.112.178, 192.168.201.252 | N/A (booted from volume) | SCS-1L-1 | 2026-03-31 04:14:54.818454 | orchestrator | | e8bfbb64-a496-4aa5-bc9b-e254a6b37bf6 | test-4 | ACTIVE | test-3=192.168.112.108, 192.168.202.149 | N/A (booted from volume) | SCS-1L-1 | 2026-03-31 04:14:54.818465 | orchestrator | | e438d2ac-6fc3-45bf-9645-b144c88218c1 | test-1 | ACTIVE | test-1=192.168.112.173, 192.168.200.150 | N/A (booted from volume) | SCS-1L-1 | 2026-03-31 04:14:54.818477 | orchestrator | | 308f9ffb-fc2b-43c2-b873-e59940a22370 | test | ACTIVE | test-1=192.168.112.169, 192.168.200.145 | N/A (booted from volume) | SCS-1L-1 | 2026-03-31 04:14:54.818489 | orchestrator | | 4819050c-89e5-4d5f-80e6-f03e7e044fd1 | test-2 | ACTIVE | test-2=192.168.112.152, 192.168.201.249 | N/A (booted from volume) | SCS-1L-1 | 2026-03-31 04:14:54.818501 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-03-31 04:14:55.194152 | orchestrator | + openstack --os-cloud test server show test 2026-03-31 04:14:58.723984 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:14:58.724156 | orchestrator | | Field | Value | 2026-03-31 04:14:58.724176 
| orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:14:58.724206 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-31 04:14:58.724218 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-31 04:14:58.724230 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-31 04:14:58.724248 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-31 04:14:58.724260 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-31 04:14:58.724271 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-31 04:14:58.724301 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-31 04:14:58.724313 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-31 04:14:58.724324 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-31 04:14:58.724347 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-31 04:14:58.724358 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-31 04:14:58.724369 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-31 04:14:58.724380 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-31 04:14:58.724510 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-31 04:14:58.724525 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-31 04:14:58.724539 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-31T04:13:07.000000 | 2026-03-31 04:14:58.724560 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-31 04:14:58.724573 | orchestrator | | accessIPv4 | | 2026-03-31 04:14:58.724587 | orchestrator | | accessIPv6 | | 2026-03-31 04:14:58.724608 | 
orchestrator | | addresses | test-1=192.168.112.169, 192.168.200.145 | 2026-03-31 04:14:58.724622 | orchestrator | | config_drive | | 2026-03-31 04:14:58.724635 | orchestrator | | created | 2026-03-31T04:12:41Z | 2026-03-31 04:14:58.724648 | orchestrator | | description | None | 2026-03-31 04:14:58.724666 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-31 04:14:58.724679 | orchestrator | | hostId | dea13d7d1f0f26d77ee47e184597d8f09de4bfdd62a042f2adf93945 | 2026-03-31 04:14:58.724692 | orchestrator | | host_status | None | 2026-03-31 04:14:58.724713 | orchestrator | | id | 308f9ffb-fc2b-43c2-b873-e59940a22370 | 2026-03-31 04:14:58.724726 | orchestrator | | image | N/A (booted from volume) | 2026-03-31 04:14:58.724749 | orchestrator | | key_name | test | 2026-03-31 04:14:58.724760 | orchestrator | | locked | False | 2026-03-31 04:14:58.724774 | orchestrator | | locked_reason | None | 2026-03-31 04:14:58.724792 | orchestrator | | name | test | 2026-03-31 04:14:58.724807 | orchestrator | | pinned_availability_zone | None | 2026-03-31 04:14:58.724819 | orchestrator | | progress | 0 | 2026-03-31 04:14:58.724837 | orchestrator | | project_id | 8dc41ee96d394daca90c4a89275123ba | 2026-03-31 04:14:58.724849 | orchestrator | | properties | hostname='test' | 2026-03-31 04:14:58.724868 | orchestrator | | security_groups | name='ssh' | 2026-03-31 04:14:58.724886 | orchestrator | | | name='icmp' | 2026-03-31 04:14:58.724898 | orchestrator | | server_groups | None | 2026-03-31 04:14:58.724909 | orchestrator | | status | ACTIVE | 2026-03-31 04:14:58.724920 | orchestrator | | tags | test | 2026-03-31 04:14:58.724931 | 
orchestrator | | trusted_image_certificates | None | 2026-03-31 04:14:58.724943 | orchestrator | | updated | 2026-03-31T04:13:40Z | 2026-03-31 04:14:58.724963 | orchestrator | | user_id | 7ef74602a8e54b1a9be1b23be6b83e77 | 2026-03-31 04:14:58.724974 | orchestrator | | volumes_attached | delete_on_termination='True', id='cc5a65d6-3da4-4ded-a86b-37cae869707b' | 2026-03-31 04:14:58.724985 | orchestrator | | | delete_on_termination='False', id='9e7bbd39-83ea-4985-8c09-440450c7e3ec' | 2026-03-31 04:14:58.728194 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:14:59.078257 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-31 04:15:02.482320 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:02.482434 | orchestrator | | Field | Value | 2026-03-31 04:15:02.482449 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:02.482459 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-31 04:15:02.482468 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-31 04:15:02.482492 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-31 04:15:02.482502 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-31 04:15:02.482511 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-31 04:15:02.482539 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-31 04:15:02.482564 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-31 04:15:02.482574 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-31 04:15:02.482584 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-31 04:15:02.482593 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-31 04:15:02.482602 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-31 04:15:02.482611 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-31 04:15:02.482624 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-31 04:15:02.482633 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-31 04:15:02.482642 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-31 04:15:02.482658 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-31T04:13:09.000000 | 2026-03-31 04:15:02.482673 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-31 04:15:02.482682 | orchestrator | | accessIPv4 | | 2026-03-31 04:15:02.482691 | orchestrator | | accessIPv6 | | 2026-03-31 04:15:02.482700 | orchestrator | | 
addresses | test-1=192.168.112.173, 192.168.200.150 | 2026-03-31 04:15:02.482709 | orchestrator | | config_drive | | 2026-03-31 04:15:02.482718 | orchestrator | | created | 2026-03-31T04:12:42Z | 2026-03-31 04:15:02.482731 | orchestrator | | description | None | 2026-03-31 04:15:02.482740 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-31 04:15:02.482755 | orchestrator | | hostId | dea13d7d1f0f26d77ee47e184597d8f09de4bfdd62a042f2adf93945 | 2026-03-31 04:15:02.482764 | orchestrator | | host_status | None | 2026-03-31 04:15:02.482778 | orchestrator | | id | e438d2ac-6fc3-45bf-9645-b144c88218c1 | 2026-03-31 04:15:02.482788 | orchestrator | | image | N/A (booted from volume) | 2026-03-31 04:15:02.482797 | orchestrator | | key_name | test | 2026-03-31 04:15:02.482806 | orchestrator | | locked | False | 2026-03-31 04:15:02.482815 | orchestrator | | locked_reason | None | 2026-03-31 04:15:02.482836 | orchestrator | | name | test-1 | 2026-03-31 04:15:02.482850 | orchestrator | | pinned_availability_zone | None | 2026-03-31 04:15:02.482866 | orchestrator | | progress | 0 | 2026-03-31 04:15:02.482875 | orchestrator | | project_id | 8dc41ee96d394daca90c4a89275123ba | 2026-03-31 04:15:02.482884 | orchestrator | | properties | hostname='test-1' | 2026-03-31 04:15:02.482898 | orchestrator | | security_groups | name='ssh' | 2026-03-31 04:15:02.482908 | orchestrator | | | name='icmp' | 2026-03-31 04:15:02.482918 | orchestrator | | server_groups | None | 2026-03-31 04:15:02.482927 | orchestrator | | status | ACTIVE | 2026-03-31 04:15:02.482936 | orchestrator | | tags | test | 2026-03-31 04:15:02.482945 | orchestrator | | 
trusted_image_certificates | None | 2026-03-31 04:15:02.482960 | orchestrator | | updated | 2026-03-31T04:13:41Z | 2026-03-31 04:15:02.482969 | orchestrator | | user_id | 7ef74602a8e54b1a9be1b23be6b83e77 | 2026-03-31 04:15:02.482979 | orchestrator | | volumes_attached | delete_on_termination='True', id='26cc9bd9-9f3c-460b-9adb-a94f7e860c4d' | 2026-03-31 04:15:02.487968 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:02.837159 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-31 04:15:06.032064 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:06.032183 | orchestrator | | Field | Value | 2026-03-31 04:15:06.032199 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:06.032209 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-31 04:15:06.032216 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-03-31 04:15:06.032221 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-31 04:15:06.032243 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-31 04:15:06.032248 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-31 04:15:06.032252 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-31 04:15:06.032271 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-31 04:15:06.032276 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-31 04:15:06.032281 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-31 04:15:06.032285 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-31 04:15:06.032290 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-31 04:15:06.032294 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-31 04:15:06.032302 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-31 04:15:06.032310 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-31 04:15:06.032314 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-31 04:15:06.032319 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-31T04:13:07.000000 | 2026-03-31 04:15:06.032328 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-31 04:15:06.032333 | orchestrator | | accessIPv4 | | 2026-03-31 04:15:06.032337 | orchestrator | | accessIPv6 | | 2026-03-31 04:15:06.032342 | orchestrator | | addresses | test-2=192.168.112.152, 192.168.201.249 | 2026-03-31 04:15:06.032346 | orchestrator | | config_drive | | 2026-03-31 04:15:06.032360 | orchestrator | | created | 2026-03-31T04:12:41Z | 2026-03-31 04:15:06.032367 | orchestrator | | description | None | 2026-03-31 04:15:06.032371 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-31 04:15:06.032376 | orchestrator | | hostId | dea13d7d1f0f26d77ee47e184597d8f09de4bfdd62a042f2adf93945 | 2026-03-31 04:15:06.032380 | orchestrator | | host_status | None | 2026-03-31 04:15:06.032389 | orchestrator | | id | 4819050c-89e5-4d5f-80e6-f03e7e044fd1 | 2026-03-31 04:15:06.032393 | orchestrator | | image | N/A (booted from volume) | 2026-03-31 04:15:06.032398 | orchestrator | | key_name | test | 2026-03-31 04:15:06.032402 | orchestrator | | locked | False | 2026-03-31 04:15:06.032410 | orchestrator | | locked_reason | None | 2026-03-31 04:15:06.032415 | orchestrator | | name | test-2 | 2026-03-31 04:15:06.032422 | orchestrator | | pinned_availability_zone | None | 2026-03-31 04:15:06.032426 | orchestrator | | progress | 0 | 2026-03-31 04:15:06.032431 | orchestrator | | project_id | 8dc41ee96d394daca90c4a89275123ba | 2026-03-31 04:15:06.032435 | orchestrator | | properties | hostname='test-2' | 2026-03-31 04:15:06.032443 | orchestrator | | security_groups | name='ssh' | 2026-03-31 04:15:06.032448 | orchestrator | | | name='icmp' | 2026-03-31 04:15:06.032453 | orchestrator | | server_groups | None | 2026-03-31 04:15:06.032457 | orchestrator | | status | ACTIVE | 2026-03-31 04:15:06.032465 | orchestrator | | tags | test | 2026-03-31 04:15:06.032469 | orchestrator | | trusted_image_certificates | None | 2026-03-31 04:15:06.032476 | orchestrator | | updated | 2026-03-31T04:13:42Z | 2026-03-31 04:15:06.032481 | orchestrator | | user_id | 7ef74602a8e54b1a9be1b23be6b83e77 | 2026-03-31 04:15:06.032486 | orchestrator | | volumes_attached | delete_on_termination='True', id='4d5367ec-bcc4-4ee3-92f1-20a3ef1f5944' | 2026-03-31 04:15:06.037540 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:06.376185 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-31 04:15:09.717904 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:09.718153 | orchestrator | | Field | Value | 2026-03-31 04:15:09.718178 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:09.718215 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-31 04:15:09.718228 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-31 04:15:09.718239 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-31 04:15:09.718278 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-31 04:15:09.718298 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-31 04:15:09.718317 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-31 
04:15:09.718362 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-31 04:15:09.718380 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-31 04:15:09.718399 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-31 04:15:09.718431 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-31 04:15:09.718453 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-31 04:15:09.718474 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-31 04:15:09.718493 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-31 04:15:09.718508 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-31 04:15:09.718521 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-31 04:15:09.718535 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-31T04:13:09.000000 | 2026-03-31 04:15:09.718558 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-31 04:15:09.718572 | orchestrator | | accessIPv4 | | 2026-03-31 04:15:09.718592 | orchestrator | | accessIPv6 | | 2026-03-31 04:15:09.718606 | orchestrator | | addresses | test-2=192.168.112.178, 192.168.201.252 | 2026-03-31 04:15:09.719084 | orchestrator | | config_drive | | 2026-03-31 04:15:09.719107 | orchestrator | | created | 2026-03-31T04:12:45Z | 2026-03-31 04:15:09.719120 | orchestrator | | description | None | 2026-03-31 04:15:09.719133 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-31 04:15:09.719145 | orchestrator | | hostId | a5b60a77e5cfa2458112bb1b4c62d4aecf9f94f698bd0ab66786e2de | 2026-03-31 04:15:09.719156 | orchestrator | | host_status | None | 2026-03-31 04:15:09.719176 | orchestrator | | id | 
e004de17-c8a7-44d3-adfc-0ea44942548f | 2026-03-31 04:15:09.719196 | orchestrator | | image | N/A (booted from volume) | 2026-03-31 04:15:09.719208 | orchestrator | | key_name | test | 2026-03-31 04:15:09.719219 | orchestrator | | locked | False | 2026-03-31 04:15:09.719235 | orchestrator | | locked_reason | None | 2026-03-31 04:15:09.719247 | orchestrator | | name | test-3 | 2026-03-31 04:15:09.719258 | orchestrator | | pinned_availability_zone | None | 2026-03-31 04:15:09.719269 | orchestrator | | progress | 0 | 2026-03-31 04:15:09.719280 | orchestrator | | project_id | 8dc41ee96d394daca90c4a89275123ba | 2026-03-31 04:15:09.719291 | orchestrator | | properties | hostname='test-3' | 2026-03-31 04:15:09.719310 | orchestrator | | security_groups | name='ssh' | 2026-03-31 04:15:09.719330 | orchestrator | | | name='icmp' | 2026-03-31 04:15:09.719342 | orchestrator | | server_groups | None | 2026-03-31 04:15:09.719353 | orchestrator | | status | ACTIVE | 2026-03-31 04:15:09.719369 | orchestrator | | tags | test | 2026-03-31 04:15:09.719380 | orchestrator | | trusted_image_certificates | None | 2026-03-31 04:15:09.719391 | orchestrator | | updated | 2026-03-31T04:13:43Z | 2026-03-31 04:15:09.719403 | orchestrator | | user_id | 7ef74602a8e54b1a9be1b23be6b83e77 | 2026-03-31 04:15:09.719414 | orchestrator | | volumes_attached | delete_on_termination='True', id='987f588f-42c6-40e7-a815-6502850a1620' | 2026-03-31 04:15:09.722685 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:10.068427 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-31 04:15:13.391429 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:13.391533 | orchestrator | | Field | Value | 2026-03-31 04:15:13.391546 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:13.391556 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-31 04:15:13.391580 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-31 04:15:13.391589 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-31 04:15:13.391597 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-31 04:15:13.391605 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-31 04:15:13.391613 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-31 04:15:13.391657 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-31 04:15:13.391665 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-31 04:15:13.391673 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-31 04:15:13.391681 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-31 04:15:13.391689 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-31 04:15:13.391700 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-31 04:15:13.391708 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-31 04:15:13.391716 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-31 04:15:13.391724 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-31 04:15:13.391737 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-31T04:13:09.000000 | 2026-03-31 04:15:13.391750 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-31 04:15:13.391759 | orchestrator | | accessIPv4 | | 2026-03-31 04:15:13.391767 | orchestrator | | accessIPv6 | | 2026-03-31 04:15:13.391775 | orchestrator | | addresses | test-3=192.168.112.108, 192.168.202.149 | 2026-03-31 04:15:13.391788 | orchestrator | | config_drive | | 2026-03-31 04:15:13.391796 | orchestrator | | created | 2026-03-31T04:12:43Z | 2026-03-31 04:15:13.391804 | orchestrator | | description | None | 2026-03-31 04:15:13.391811 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-31 04:15:13.391824 | orchestrator | | hostId | dea13d7d1f0f26d77ee47e184597d8f09de4bfdd62a042f2adf93945 | 2026-03-31 04:15:13.391832 | orchestrator | | host_status | None | 2026-03-31 04:15:13.391847 | orchestrator | | id | e8bfbb64-a496-4aa5-bc9b-e254a6b37bf6 | 2026-03-31 04:15:13.391855 | orchestrator | | image | N/A (booted from volume) | 2026-03-31 04:15:13.391863 | orchestrator | | key_name | test | 2026-03-31 04:15:13.391871 | orchestrator | | locked | False | 2026-03-31 04:15:13.391883 | orchestrator | | locked_reason | None | 2026-03-31 04:15:13.391891 | orchestrator | | name | test-4 | 2026-03-31 04:15:13.391899 | orchestrator | | pinned_availability_zone | None | 2026-03-31 04:15:13.391908 | orchestrator | | progress | 0 | 2026-03-31 
04:15:13.391921 | orchestrator | | project_id | 8dc41ee96d394daca90c4a89275123ba | 2026-03-31 04:15:13.391930 | orchestrator | | properties | hostname='test-4' | 2026-03-31 04:15:13.391943 | orchestrator | | security_groups | name='ssh' | 2026-03-31 04:15:13.391952 | orchestrator | | | name='icmp' | 2026-03-31 04:15:13.391961 | orchestrator | | server_groups | None | 2026-03-31 04:15:13.391969 | orchestrator | | status | ACTIVE | 2026-03-31 04:15:13.391981 | orchestrator | | tags | test | 2026-03-31 04:15:13.392015 | orchestrator | | trusted_image_certificates | None | 2026-03-31 04:15:13.392024 | orchestrator | | updated | 2026-03-31T04:13:44Z | 2026-03-31 04:15:13.392039 | orchestrator | | user_id | 7ef74602a8e54b1a9be1b23be6b83e77 | 2026-03-31 04:15:13.392048 | orchestrator | | volumes_attached | delete_on_termination='True', id='2dcc552e-168e-449f-b0c7-fbedd8be0265' | 2026-03-31 04:15:13.395773 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-31 04:15:13.716687 | orchestrator | + server_ping 2026-03-31 04:15:13.717874 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-31 04:15:13.717924 | orchestrator | ++ tr -d '\r' 2026-03-31 04:15:17.295435 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-31 04:15:17.295528 | orchestrator | + ping -c3 192.168.112.152 2026-03-31 04:15:17.311240 | orchestrator | PING 192.168.112.152 (192.168.112.152) 56(84) bytes of data. 
2026-03-31 04:15:17.311329 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=1 ttl=63 time=7.20 ms 2026-03-31 04:15:18.308584 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=2 ttl=63 time=3.06 ms 2026-03-31 04:15:19.309846 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=3 ttl=63 time=2.04 ms 2026-03-31 04:15:19.309957 | orchestrator | 2026-03-31 04:15:19.309974 | orchestrator | --- 192.168.112.152 ping statistics --- 2026-03-31 04:15:19.310100 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-31 04:15:19.310116 | orchestrator | rtt min/avg/max/mdev = 2.036/4.096/7.198/2.232 ms 2026-03-31 04:15:19.310469 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-31 04:15:19.310494 | orchestrator | + ping -c3 192.168.112.173 2026-03-31 04:15:19.322437 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data. 2026-03-31 04:15:19.322539 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=7.52 ms 2026-03-31 04:15:20.320202 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=3.11 ms 2026-03-31 04:15:21.321121 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=2.46 ms 2026-03-31 04:15:21.321225 | orchestrator | 2026-03-31 04:15:21.321242 | orchestrator | --- 192.168.112.173 ping statistics --- 2026-03-31 04:15:21.321255 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-31 04:15:21.321267 | orchestrator | rtt min/avg/max/mdev = 2.460/4.362/7.520/2.248 ms 2026-03-31 04:15:21.321756 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-31 04:15:21.321782 | orchestrator | + ping -c3 192.168.112.169 2026-03-31 04:15:21.338399 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 
2026-03-31 04:15:21.338501 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=11.4 ms 2026-03-31 04:15:22.332051 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=3.11 ms 2026-03-31 04:15:23.332196 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.07 ms 2026-03-31 04:15:23.332300 | orchestrator | 2026-03-31 04:15:23.332343 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-03-31 04:15:23.332357 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-31 04:15:23.332368 | orchestrator | rtt min/avg/max/mdev = 2.066/5.524/11.395/4.172 ms 2026-03-31 04:15:23.333117 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-31 04:15:23.333160 | orchestrator | + ping -c3 192.168.112.178 2026-03-31 04:15:23.346346 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data. 2026-03-31 04:15:23.346441 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=7.27 ms 2026-03-31 04:15:24.343440 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.57 ms 2026-03-31 04:15:25.345115 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.54 ms 2026-03-31 04:15:25.345203 | orchestrator | 2026-03-31 04:15:25.345215 | orchestrator | --- 192.168.112.178 ping statistics --- 2026-03-31 04:15:25.345225 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-31 04:15:25.345233 | orchestrator | rtt min/avg/max/mdev = 1.538/3.792/7.270/2.494 ms 2026-03-31 04:15:25.345247 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-31 04:15:25.345261 | orchestrator | + ping -c3 192.168.112.108 2026-03-31 04:15:25.360797 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2026-03-31 04:15:25.360879 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=9.83 ms
2026-03-31 04:15:26.354173 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.29 ms
2026-03-31 04:15:27.356236 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.14 ms
2026-03-31 04:15:27.356344 | orchestrator |
2026-03-31 04:15:27.356360 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-31 04:15:27.356373 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-31 04:15:27.356384 | orchestrator | rtt min/avg/max/mdev = 2.140/4.755/9.834/3.591 ms
2026-03-31 04:15:27.356557 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-31 04:15:27.552942 | orchestrator | ok: Runtime: 0:09:26.929027
2026-03-31 04:15:27.599605 |
2026-03-31 04:15:27.599738 | TASK [Run tempest]
2026-03-31 04:15:28.133159 | orchestrator | skipping: Conditional result was False
2026-03-31 04:15:28.145153 |
2026-03-31 04:15:28.145288 | TASK [Check prometheus alert status]
2026-03-31 04:15:28.679107 | orchestrator | skipping: Conditional result was False
2026-03-31 04:15:28.693303 |
2026-03-31 04:15:28.693454 | PLAY [Upgrade testbed]
2026-03-31 04:15:28.705829 |
2026-03-31 04:15:28.705951 | TASK [Print next ceph version]
2026-03-31 04:15:28.784495 | orchestrator | ok
2026-03-31 04:15:28.793705 |
2026-03-31 04:15:28.793875 | TASK [Print next openstack version]
2026-03-31 04:15:28.873433 | orchestrator | ok
2026-03-31 04:15:28.884901 |
2026-03-31 04:15:28.885035 | TASK [Print next manager version]
2026-03-31 04:15:28.957993 | orchestrator | ok
2026-03-31 04:15:28.968717 |
2026-03-31 04:15:28.968869 | TASK [Set cloud fact (Zuul deployment)]
2026-03-31 04:15:29.038283 | orchestrator | ok
2026-03-31 04:15:29.052695 |
2026-03-31 04:15:29.052869 | TASK [Set cloud fact (local deployment)]
2026-03-31 04:15:29.088951 | orchestrator | skipping: Conditional result was False
2026-03-31 04:15:29.105570 |
2026-03-31 04:15:29.105728 | TASK [Fetch manager address]
2026-03-31 04:15:29.382383 | orchestrator | ok
2026-03-31 04:15:29.392697 |
2026-03-31 04:15:29.392909 | TASK [Set manager_host address]
2026-03-31 04:15:29.473724 | orchestrator | ok
2026-03-31 04:15:29.485512 |
2026-03-31 04:15:29.485687 | TASK [Run upgrade]
2026-03-31 04:15:30.195687 | orchestrator | + set -e
2026-03-31 04:15:30.195919 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-03-31 04:15:30.195945 | orchestrator | + MANAGER_VERSION=10.0.0
2026-03-31 04:15:30.195956 | orchestrator | + CEPH_VERSION=reef
2026-03-31 04:15:30.195963 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-31 04:15:30.195971 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-31 04:15:30.196023 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-03-31 04:15:30.207264 | orchestrator | + set -e
2026-03-31 04:15:30.207338 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:15:30.207346 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:15:30.207355 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:15:30.207360 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:15:30.207366 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:15:30.208157 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-03-31 04:15:30.246490 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-03-31 04:15:30.247384 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-31 04:15:30.285968 | orchestrator |
2026-03-31 04:15:30.286140 | orchestrator | # UPGRADE MANAGER
2026-03-31 04:15:30.286152 | orchestrator |
2026-03-31 04:15:30.286157 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-03-31 04:15:30.286164 | orchestrator | + echo
2026-03-31 04:15:30.286170 | orchestrator | + echo '# UPGRADE MANAGER'
2026-03-31 04:15:30.286176 | orchestrator | + echo
2026-03-31 04:15:30.286181 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-03-31 04:15:30.286186 | orchestrator | + MANAGER_VERSION=10.0.0
2026-03-31 04:15:30.286191 | orchestrator | + CEPH_VERSION=reef
2026-03-31 04:15:30.286196 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-31 04:15:30.286201 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-31 04:15:30.286206 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-03-31 04:15:30.295044 | orchestrator | + set -e
2026-03-31 04:15:30.295150 | orchestrator | + VERSION=10.0.0
2026-03-31 04:15:30.295166 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:15:30.301358 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-03-31 04:15:30.301447 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:15:30.305914 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:15:30.309484 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-31 04:15:30.317504 | orchestrator | /opt/configuration ~
2026-03-31 04:15:30.317579 | orchestrator | + set -e
2026-03-31 04:15:30.317588 | orchestrator | + pushd /opt/configuration
2026-03-31 04:15:30.317595 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-31 04:15:30.317604 | orchestrator | + source /opt/venv/bin/activate
2026-03-31 04:15:30.318824 | orchestrator | ++ deactivate nondestructive
2026-03-31 04:15:30.318866 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:30.318876 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:30.318886 | orchestrator | ++ hash -r
2026-03-31 04:15:30.318895 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:30.318904 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-31 04:15:30.318913 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-31 04:15:30.318921 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-31 04:15:30.319063 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-31 04:15:30.319078 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-31 04:15:30.319088 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-31 04:15:30.319098 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-31 04:15:30.319109 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:30.319119 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:30.319128 | orchestrator | ++ export PATH
2026-03-31 04:15:30.319140 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:30.319150 | orchestrator | ++ '[' -z '' ']'
2026-03-31 04:15:30.319159 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-31 04:15:30.319167 | orchestrator | ++ PS1='(venv) '
2026-03-31 04:15:30.319176 | orchestrator | ++ export PS1
2026-03-31 04:15:30.319184 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-31 04:15:30.319193 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-31 04:15:30.319201 | orchestrator | ++ hash -r
2026-03-31 04:15:30.319373 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-31 04:15:31.695919 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-31 04:15:31.696705 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-03-31 04:15:31.698714 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-31 04:15:31.700254 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-31 04:15:31.701174 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-31 04:15:31.713244 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-31 04:15:31.715014 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-31 04:15:31.716260 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-31 04:15:31.717675 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-31 04:15:31.776062 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-31 04:15:31.779126 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-31 04:15:31.784146 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-31 04:15:31.787514 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-31 04:15:31.795326 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-31 04:15:32.155520 | orchestrator | ++ which gilt
2026-03-31 04:15:32.158844 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-31 04:15:32.158924 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-31 04:15:32.459000 | orchestrator | osism.cfg-generics:
2026-03-31 04:15:32.573495 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-31 04:15:32.574915 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-31 04:15:32.576212 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-31 04:15:32.576277 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-31 04:15:33.857046 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-31 04:15:33.868608 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-31 04:15:34.422513 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-31 04:15:34.496505 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-31 04:15:34.496600 | orchestrator | + deactivate
2026-03-31 04:15:34.496615 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-31 04:15:34.496628 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:34.496638 | orchestrator | + export PATH
2026-03-31 04:15:34.496647 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-31 04:15:34.496657 | orchestrator | + '[' -n '' ']'
2026-03-31 04:15:34.496666 | orchestrator | + hash -r
2026-03-31 04:15:34.496676 | orchestrator | ~
2026-03-31 04:15:34.496685 | orchestrator | + '[' -n '' ']'
2026-03-31 04:15:34.496694 | orchestrator | + unset VIRTUAL_ENV
2026-03-31 04:15:34.496704 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-31 04:15:34.496713 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-31 04:15:34.496723 | orchestrator | + unset -f deactivate
2026-03-31 04:15:34.496733 | orchestrator | + popd
2026-03-31 04:15:34.499145 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-03-31 04:15:34.499209 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-31 04:15:34.503946 | orchestrator | + set -e
2026-03-31 04:15:34.504048 | orchestrator | + NAMESPACE=kolla/release
2026-03-31 04:15:34.504061 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-31 04:15:34.512039 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-31 04:15:34.518175 | orchestrator | /opt/configuration ~
2026-03-31 04:15:34.518237 | orchestrator | + set -e
2026-03-31 04:15:34.518250 | orchestrator | + pushd /opt/configuration
2026-03-31 04:15:34.518262 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-31 04:15:34.518273 | orchestrator | + source /opt/venv/bin/activate
2026-03-31 04:15:34.518285 | orchestrator | ++ deactivate nondestructive
2026-03-31 04:15:34.518296 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:34.518307 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:34.518318 | orchestrator | ++ hash -r
2026-03-31 04:15:34.518329 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:34.518340 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-31 04:15:34.518351 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-31 04:15:34.518362 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-31 04:15:34.518373 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-31 04:15:34.518384 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-31 04:15:34.518395 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-31 04:15:34.518411 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-31 04:15:34.518426 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:34.518438 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:34.518449 | orchestrator | ++ export PATH
2026-03-31 04:15:34.518460 | orchestrator | ++ '[' -n '' ']'
2026-03-31 04:15:34.518471 | orchestrator | ++ '[' -z '' ']'
2026-03-31 04:15:34.518482 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-31 04:15:34.518492 | orchestrator | ++ PS1='(venv) '
2026-03-31 04:15:34.518503 | orchestrator | ++ export PS1
2026-03-31 04:15:34.518515 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-31 04:15:34.518525 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-31 04:15:34.518536 | orchestrator | ++ hash -r
2026-03-31 04:15:34.518548 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-31 04:15:35.096842 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-31 04:15:35.098098 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-03-31 04:15:35.099704 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-31 04:15:35.101582 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-31 04:15:35.102417 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-31 04:15:35.115144 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-31 04:15:35.117177 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-31 04:15:35.118261 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-31 04:15:35.119845 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-31 04:15:35.177104 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-31 04:15:35.178602 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-31 04:15:35.180456 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-31 04:15:35.182301 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-31 04:15:35.186578 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-31 04:15:35.536824 | orchestrator | ++ which gilt
2026-03-31 04:15:35.540197 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-31 04:15:35.540275 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-31 04:15:35.760029 | orchestrator | osism.cfg-generics:
2026-03-31 04:15:35.850058 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-31 04:15:35.850248 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-31 04:15:35.850670 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-31 04:15:35.850680 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-31 04:15:36.480128 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-31 04:15:36.490557 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-31 04:15:36.898291 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-31 04:15:36.970478 | orchestrator | ~
2026-03-31 04:15:36.970588 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-31 04:15:36.970611 | orchestrator | + deactivate
2026-03-31 04:15:36.970630 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-31 04:15:36.970649 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-31 04:15:36.970664 | orchestrator | + export PATH
2026-03-31 04:15:36.970682 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-31 04:15:36.970699 | orchestrator | + '[' -n '' ']'
2026-03-31 04:15:36.970717 | orchestrator | + hash -r
2026-03-31 04:15:36.970734 | orchestrator | + '[' -n '' ']'
2026-03-31 04:15:36.970751 | orchestrator | + unset VIRTUAL_ENV
2026-03-31 04:15:36.970767 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-31 04:15:36.970781 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-31 04:15:36.970791 | orchestrator | + unset -f deactivate
2026-03-31 04:15:36.970801 | orchestrator | + popd
2026-03-31 04:15:36.972694 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-03-31 04:15:37.038221 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-31 04:15:37.038712 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-03-31 04:15:37.111233 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 04:15:37.111383 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-31 04:15:37.115231 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-31 04:15:37.119294 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-03-31 04:15:37.176895 | orchestrator | ++ '[' -1 -le 0 ']'
2026-03-31 04:15:37.178153 | orchestrator | +++ semver 10.0.0 10.0.0-0
2026-03-31 04:15:37.250915 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-03-31 04:15:37.251017 | orchestrator | ++ echo true
2026-03-31 04:15:37.251031 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-03-31 04:15:37.253119 | orchestrator | +++ semver 2024.2 2024.2
2026-03-31 04:15:37.337075 | orchestrator | ++ '[' 0 -le 0 ']'
2026-03-31 04:15:37.338237 | orchestrator | +++ semver 2024.2 2025.1
2026-03-31 04:15:37.406319 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-03-31 04:15:37.406423 | orchestrator | ++ echo false
2026-03-31 04:15:37.406646 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-03-31 04:15:37.406731 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-31 04:15:37.406755 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-03-31 04:15:37.406822 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-03-31 04:15:37.406875 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:15:37.412824 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-03-31 04:15:37.412916 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-03-31 04:15:37.435744 | orchestrator | export RABBITMQ3TO4=true
2026-03-31 04:15:37.439049 | orchestrator | + osism update manager
2026-03-31 04:15:43.518434 | orchestrator | Collecting uv
2026-03-31 04:15:43.621679 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-03-31 04:15:43.642871 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB)
2026-03-31 04:15:44.475163 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 34.1 MB/s eta 0:00:00
2026-03-31 04:15:44.569864 | orchestrator | Installing collected packages: uv
2026-03-31 04:15:45.099243 | orchestrator | Successfully installed uv-0.11.2
2026-03-31 04:15:45.900403 | orchestrator | Resolved 11 packages in 371ms
2026-03-31 04:15:45.946387 | orchestrator | Downloading cryptography (4.3MiB)
2026-03-31 04:15:45.946570 | orchestrator | Downloading ansible-core (2.1MiB)
2026-03-31 04:15:45.946704 | orchestrator | Downloading netaddr (2.2MiB)
2026-03-31 04:15:45.946770 | orchestrator | Downloading ansible (54.5MiB)
2026-03-31 04:15:46.344873 | orchestrator | Downloaded netaddr
2026-03-31 04:15:46.471496 | orchestrator | Downloaded cryptography
2026-03-31 04:15:46.574321 | orchestrator | Downloaded ansible-core
2026-03-31 04:15:54.521736 | orchestrator | Downloaded ansible
2026-03-31 04:15:54.522363 | orchestrator | Prepared 11 packages in 8.62s
2026-03-31 04:15:55.162465 | orchestrator | Installed 11 packages in 639ms
2026-03-31 04:15:55.162561 | orchestrator | + ansible==11.11.0
2026-03-31 04:15:55.162601 | orchestrator | + ansible-core==2.18.15
2026-03-31 04:15:55.162613 | orchestrator | + cffi==2.0.0
2026-03-31 04:15:55.162624 | orchestrator | + cryptography==46.0.6
2026-03-31 04:15:55.162635 | orchestrator | + jinja2==3.1.6
2026-03-31 04:15:55.162645 | orchestrator | + markupsafe==3.0.3
2026-03-31 04:15:55.162655 | orchestrator | + netaddr==1.3.0
2026-03-31 04:15:55.162665 | orchestrator | + packaging==26.0
2026-03-31 04:15:55.162675 | orchestrator | + pycparser==3.0
2026-03-31 04:15:55.162685 | orchestrator | + pyyaml==6.0.3
2026-03-31 04:15:55.162697 | orchestrator | + resolvelib==1.0.1
2026-03-31 04:15:56.465081 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203262o3zcs22n/tmpy8yyb27d/ansible-collection-servicesr27p1e0t'...
2026-03-31 04:15:58.070874 | orchestrator | Your branch is up to date with 'origin/main'.
2026-03-31 04:15:58.071073 | orchestrator | Already on 'main'
2026-03-31 04:15:58.607589 | orchestrator | Starting galaxy collection install process
2026-03-31 04:15:58.607691 | orchestrator | Process install dependency map
2026-03-31 04:15:58.607708 | orchestrator | Starting collection install process
2026-03-31 04:15:58.607722 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-03-31 04:15:58.607735 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-03-31 04:15:58.607747 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-31 04:15:59.227196 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-20329957oau5i4/tmpd82k658x/ansible-playbooks-managerdfze6dfp'...
2026-03-31 04:15:59.823225 | orchestrator | Already on 'main'
2026-03-31 04:15:59.823314 | orchestrator | Your branch is up to date with 'origin/main'.
2026-03-31 04:16:00.121846 | orchestrator | Starting galaxy collection install process
2026-03-31 04:16:00.121932 | orchestrator | Process install dependency map
2026-03-31 04:16:00.121972 | orchestrator | Starting collection install process
2026-03-31 04:16:00.121983 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-03-31 04:16:00.121992 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-03-31 04:16:00.122000 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-03-31 04:16:00.934601 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-03-31 04:16:00.934699 | orchestrator | -vvvv to see details
2026-03-31 04:16:01.405425 | orchestrator |
2026-03-31 04:16:01.405515 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-03-31 04:16:01.405528 | orchestrator |
2026-03-31 04:16:01.405556 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-31 04:16:05.858780 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:05.858876 | orchestrator |
2026-03-31 04:16:05.858885 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-31 04:16:05.953129 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 04:16:05.953237 | orchestrator |
2026-03-31 04:16:05.953256 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-31 04:16:08.065925 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:08.066111 | orchestrator |
2026-03-31 04:16:08.066129 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-31 04:16:08.125466 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:08.125555 | orchestrator |
2026-03-31 04:16:08.125569 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-31 04:16:08.210312 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-31 04:16:08.210399 | orchestrator |
2026-03-31 04:16:08.210411 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-31 04:16:12.751484 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-03-31 04:16:12.751569 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-03-31 04:16:12.751577 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-31 04:16:12.751592 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-03-31 04:16:12.751597 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-31 04:16:12.751602 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-31 04:16:12.751607 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-31 04:16:12.751612 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-03-31 04:16:12.751617 | orchestrator |
2026-03-31 04:16:12.751623 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-31 04:16:13.946983 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:13.947064 | orchestrator |
2026-03-31 04:16:13.947075 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-31 04:16:14.985748 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:14.985840 | orchestrator |
2026-03-31 04:16:14.985853 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-31 04:16:15.080270 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-31 04:16:15.080378 | orchestrator |
2026-03-31 04:16:15.080396 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-31 04:16:17.221347 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-03-31 04:16:17.221481 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-03-31 04:16:17.221508 | orchestrator |
2026-03-31 04:16:17.221529 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-31 04:16:18.316052 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:18.316162 | orchestrator |
2026-03-31 04:16:18.316180 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-31 04:16:18.407884 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:16:18.408047 | orchestrator |
2026-03-31 04:16:18.408068 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-31 04:16:18.500132 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-31 04:16:18.500258 | orchestrator |
2026-03-31 04:16:18.500284 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-31 04:16:19.619716 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:19.619842 | orchestrator |
2026-03-31 04:16:19.619868 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-31 04:16:19.694616 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-31 04:16:19.694750 | orchestrator |
2026-03-31 04:16:19.694778 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-31 04:16:21.927629 | orchestrator | ok: [testbed-manager] => (item=None)
2026-03-31 04:16:21.927726 | orchestrator | ok: [testbed-manager] => (item=None)
2026-03-31 04:16:21.927738 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:21.927750 | orchestrator |
2026-03-31 04:16:21.927759 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-31 04:16:22.966610 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:22.966712 | orchestrator |
2026-03-31 04:16:22.966728 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-31 04:16:23.046349 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:16:23.046438 | orchestrator |
2026-03-31 04:16:23.046451 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-31 04:16:23.179457 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-31 04:16:23.179549 | orchestrator |
2026-03-31 04:16:23.179564 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-31 04:16:23.999152 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:23.999279 | orchestrator |
2026-03-31 04:16:23.999304 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-31 04:16:24.605412 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:24.605527 | orchestrator |
2026-03-31 04:16:24.605569 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-31 04:16:26.673550 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-03-31 04:16:26.673643 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-03-31 04:16:26.673654 | orchestrator |
2026-03-31 04:16:26.673663 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-31 04:16:28.006320 | orchestrator | changed: [testbed-manager]
2026-03-31 04:16:28.006440 | orchestrator |
2026-03-31 04:16:28.006458 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-31 04:16:28.644188 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:28.644301 | orchestrator |
2026-03-31 04:16:28.644319 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-31 04:16:29.226795 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:29.226892 | orchestrator |
2026-03-31 04:16:29.226908 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-31 04:16:29.294536 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:16:29.294639 | orchestrator |
2026-03-31 04:16:29.294656 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-31 04:16:29.387338 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-31 04:16:29.387466 | orchestrator |
2026-03-31 04:16:29.387486 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-31 04:16:29.464335 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:29.464444 | orchestrator |
2026-03-31 04:16:29.464464 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-31 04:16:32.549796 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-03-31 04:16:32.549907 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-03-31 04:16:32.549969 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-03-31 04:16:32.549981 | orchestrator |
2026-03-31 04:16:32.549992 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-31 04:16:33.634348 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:33.634445 | orchestrator |
2026-03-31 04:16:33.634461 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-31 04:16:34.717604 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:34.717689 | orchestrator |
2026-03-31 04:16:34.717700 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-31 04:16:35.798556 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:35.798638 | orchestrator |
2026-03-31 04:16:35.798651 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-31 04:16:35.902438 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-31 04:16:35.902540 | orchestrator |
2026-03-31 04:16:35.902558 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-31 04:16:35.961479 | orchestrator | ok: [testbed-manager]
2026-03-31 04:16:35.961576 | orchestrator |
2026-03-31 04:16:35.961592 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-31 04:16:37.114421 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-03-31 04:16:37.114533 | orchestrator |
2026-03-31 04:16:37.114550 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-31 04:16:37.203710 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-31 04:16:37.203834 | orchestrator |
2026-03-31 04:16:37.203861 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-31 04:16:38.341250 | orchestrator | ok: [testbed-manager] 2026-03-31 04:16:38.341356 | orchestrator | 2026-03-31 04:16:38.341372 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-31 04:16:39.462717 | orchestrator | ok: [testbed-manager] 2026-03-31 04:16:39.462851 | orchestrator | 2026-03-31 04:16:39.462879 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-31 04:16:39.547002 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:16:39.547115 | orchestrator | 2026-03-31 04:16:39.547132 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-31 04:16:39.617544 | orchestrator | ok: [testbed-manager] 2026-03-31 04:16:39.617645 | orchestrator | 2026-03-31 04:16:39.617662 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-31 04:16:41.112665 | orchestrator | changed: [testbed-manager] 2026-03-31 04:16:41.112764 | orchestrator | 2026-03-31 04:16:41.112778 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-31 04:16:42.732782 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/inventory- 2026-03-31 04:16:42.732864 | orchestrator | reconciler:0.20260322.0: Interrupted 2026-03-31 04:16:42.732875 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/ceph- 2026-03-31 04:16:42.732882 | orchestrator | ansible:0.20260322.0: Interrupted 2026-03-31 04:16:42.732890 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/osism- 2026-03-31 04:16:42.732896 | orchestrator | frontend:0.20260320.0: Interrupted 2026-03-31 04:16:42.732903 | orchestrator | [WARNING]: Docker compose: image 2026-03-31 04:16:42.732952 | orchestrator | registry.osism.tech/dockerhub/library/redis:7.4.7-alpine: Interrupted 2026-03-31 04:16:42.732960 | orchestrator | 
[WARNING]: Docker compose: image
2026-03-31 04:16:42.732967 | orchestrator | registry.osism.tech/dockerhub/library/mariadb:11.8.4: Interrupted
2026-03-31 04:16:42.732974 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/osism:0.20260320.0:
2026-03-31 04:16:42.732981 | orchestrator | Interrupted
2026-03-31 04:16:42.732988 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/osism-
2026-03-31 04:16:42.732994 | orchestrator | kubernetes:0.20260322.0: Interrupted
2026-03-31 04:16:42.733001 | orchestrator | [WARNING]: Docker compose: image registry.osism.tech/osism/osism-
2026-03-31 04:16:42.733007 | orchestrator | ansible:0.20260322.0: Interrupted
2026-03-31 04:16:42.738637 | orchestrator | fatal: [testbed-manager]: FAILED! => {"actions": [{"id": "registry.osism.tech/osism/ceph-ansible:0.20260322.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/inventory-reconciler:0.20260322.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/dockerhub/library/mariadb:11.8.4", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/osism-frontend:0.20260320.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/kolla-ansible:0.20260328.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/osism-ansible:0.20260322.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/ara-server:1.7.3", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/osism-kubernetes:0.20260322.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/osism/osism:0.20260320.0", "status": "Pulling", "what": "image"}, {"id": "registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", "status": "Pulling", "what": "image"}], "changed": false, "cmd": "/usr/bin/docker compose --ansi never --progress json --project-directory /opt/manager pull --", "msg": "Error when processing image registry.osism.tech/osism/kolla-ansible:0.20260328.0: Error\nGeneral error: Error response from daemon: unknown: artifact osism/kolla-ansible:0.20260328.0 not found", "rc": 1, "stderr": "{\"id\":\"Image registry.osism.tech/osism/ceph-ansible:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/inventory-reconciler:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/dockerhub/library/mariadb:11.8.4\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-frontend:0.20260320.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/kolla-ansible:0.20260328.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-ansible:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/ara-server:1.7.3\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-kubernetes:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/osism:0.20260320.0\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/dockerhub/library/redis:7.4.7-alpine\",\"status\":\"Working\",\"text\":\"Pulling\"}\n{\"id\":\"Image registry.osism.tech/osism/kolla-ansible:0.20260328.0\",\"status\":\"Error\",\"text\":\"Error\",\"details\":\"unknown: artifact osism/kolla-ansible:0.20260328.0 not found\"}\n{\"id\":\"Image registry.osism.tech/osism/inventory-reconciler:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/osism/ceph-ansible:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-frontend:0.20260320.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/dockerhub/library/redis:7.4.7-alpine\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/dockerhub/library/mariadb:11.8.4\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/osism/osism:0.20260320.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-kubernetes:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"id\":\"Image registry.osism.tech/osism/osism-ansible:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}\n{\"error\":true,\"message\":\"Error response from daemon: unknown: artifact osism/kolla-ansible:0.20260328.0 not found\"}\n", "stderr_lines": ["{\"id\":\"Image registry.osism.tech/osism/ceph-ansible:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/inventory-reconciler:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/dockerhub/library/mariadb:11.8.4\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-frontend:0.20260320.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/kolla-ansible:0.20260328.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-ansible:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/ara-server:1.7.3\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-kubernetes:0.20260322.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/osism:0.20260320.0\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/dockerhub/library/redis:7.4.7-alpine\",\"status\":\"Working\",\"text\":\"Pulling\"}", "{\"id\":\"Image registry.osism.tech/osism/kolla-ansible:0.20260328.0\",\"status\":\"Error\",\"text\":\"Error\",\"details\":\"unknown: artifact osism/kolla-ansible:0.20260328.0 not found\"}", "{\"id\":\"Image registry.osism.tech/osism/inventory-reconciler:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/osism/ceph-ansible:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-frontend:0.20260320.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/dockerhub/library/redis:7.4.7-alpine\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/dockerhub/library/mariadb:11.8.4\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/osism/osism:0.20260320.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-kubernetes:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"id\":\"Image registry.osism.tech/osism/osism-ansible:0.20260322.0\",\"status\":\"Warning\",\"text\":\"Interrupted\"}", "{\"error\":true,\"message\":\"Error response from daemon: unknown: artifact osism/kolla-ansible:0.20260328.0 not found\"}"], "stdout": "", "stdout_lines": []}
2026-03-31 04:16:42.738738 | orchestrator |
2026-03-31 04:16:42.738771 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:16:42.738780 | orchestrator | testbed-manager : ok=37 changed=2 unreachable=0 failed=1 skipped=4 rescued=0 ignored=0
2026-03-31 04:16:42.738787 | orchestrator |
2026-03-31 04:16:45.379851 | orchestrator | 2026-03-31 04:16:45 | INFO  | Task 0224ecfd-1bc4-4b92-9ec8-e113feeb600c (sync inventory) is running in background. Output coming soon.
2026-03-31 04:17:19.010665 | orchestrator | 2026-03-31 04:16:47 | INFO  | Starting group_vars file reorganization
2026-03-31 04:17:19.010746 | orchestrator | 2026-03-31 04:16:47 | INFO  | Moved 0 file(s) to their respective directories
2026-03-31 04:17:19.010753 | orchestrator | 2026-03-31 04:16:47 | INFO  | Group_vars file reorganization completed
2026-03-31 04:17:19.010757 | orchestrator | 2026-03-31 04:16:50 | INFO  | Starting variable preparation from inventory
2026-03-31 04:17:19.010762 | orchestrator | 2026-03-31 04:16:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-31 04:17:19.010766 | orchestrator | 2026-03-31 04:16:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-31 04:17:19.010771 | orchestrator | 2026-03-31 04:16:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-31 04:17:19.010775 | orchestrator | 2026-03-31 04:16:54 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-31 04:17:19.010779 | orchestrator | 2026-03-31 04:16:54 | INFO  | Variable preparation completed
2026-03-31 04:17:19.010783 | orchestrator | 2026-03-31 04:16:56 | INFO  | Starting inventory overwrite handling
2026-03-31 04:17:19.010787 | orchestrator | 2026-03-31 04:16:56 | INFO  | Handling group overwrites in 99-overwrite
2026-03-31 04:17:19.010791 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removing group frr:children from 60-generic
2026-03-31 04:17:19.010795 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-31 04:17:19.010799 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-31 04:17:19.010804 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-31 04:17:19.010808 | orchestrator | 2026-03-31 04:16:56 | INFO  | Handling group overwrites in 20-roles
2026-03-31 04:17:19.010812 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-31 04:17:19.010816 | orchestrator | 2026-03-31 04:16:56 | INFO  | Removed 5 group(s) in total
2026-03-31 04:17:19.010820 | orchestrator | 2026-03-31 04:16:56 | INFO  | Inventory overwrite handling completed
2026-03-31 04:17:19.010824 | orchestrator | 2026-03-31 04:16:57 | INFO  | Starting merge of inventory files
2026-03-31 04:17:19.010828 | orchestrator | 2026-03-31 04:16:57 | INFO  | Inventory files merged successfully
2026-03-31 04:17:19.010832 | orchestrator | 2026-03-31 04:17:03 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-31 04:17:19.010853 | orchestrator | 2026-03-31 04:17:17 | INFO  | Successfully wrote ClusterShell configuration
2026-03-31 04:17:19.472324 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-31 04:17:19.472451 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-31 04:17:19.472477 | orchestrator | + local max_attempts=60
2026-03-31 04:17:19.472499 | orchestrator | + local name=kolla-ansible
2026-03-31 04:17:19.472517 | orchestrator | + local attempt_num=1
2026-03-31 04:17:19.472536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-31 04:17:19.512930 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-31 04:17:19.513076 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-31 04:17:19.513102 | orchestrator | + local max_attempts=60
2026-03-31 04:17:19.513120 | orchestrator | + local name=osism-ansible
2026-03-31 04:17:19.513138 | orchestrator | + local attempt_num=1
2026-03-31 04:17:19.514201 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-31 04:17:19.558297 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-31 04:17:19.558400 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-31 04:17:19.758977 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-31 04:17:19.759110 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759125 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759132 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp
2026-03-31 04:17:19.759140 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 hours (healthy) 8000/tcp
2026-03-31 04:17:19.759147 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759153 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759159 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759164 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759170 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 2 hours (healthy) 3306/tcp
2026-03-31 04:17:19.759176 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759182 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 2 hours (healthy) 6379/tcp
2026-03-31 04:17:19.759188 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759194 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp
2026-03-31 04:17:19.759223 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.759230 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 hours ago Up 2 hours (healthy)
2026-03-31 04:17:19.768481 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-03-31 04:17:19.768593 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-03-31 04:17:19.768608 | orchestrator | + osism apply facts
2026-03-31 04:17:32.147842 | orchestrator | 2026-03-31 04:17:32 | INFO  | Task 52525a31-34d6-409f-963d-12754d87835c (facts) was prepared for execution.
2026-03-31 04:17:32.148020 | orchestrator | 2026-03-31 04:17:32 | INFO  | It takes a moment until task 52525a31-34d6-409f-963d-12754d87835c (facts) has been started and output is visible here.
2026-03-31 04:17:48.563474 | orchestrator |
2026-03-31 04:17:48.563611 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-31 04:17:48.563643 | orchestrator |
2026-03-31 04:17:48.563664 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-31 04:17:48.563683 | orchestrator | Tuesday 31 March 2026 04:17:37 +0000 (0:00:00.305) 0:00:00.305 *********
2026-03-31 04:17:48.563695 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:17:48.563709 | orchestrator | ok: [testbed-manager]
2026-03-31 04:17:48.563720 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:17:48.563731 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:17:48.563742 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:17:48.563753 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:17:48.563786 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:17:48.563798 | orchestrator |
2026-03-31 04:17:48.563809 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-31 04:17:48.563821 | orchestrator | Tuesday 31 March 2026 04:17:38 +0000 (0:00:01.254) 0:00:01.559 *********
2026-03-31 04:17:48.563832 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:17:48.563845 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:17:48.563933 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:17:48.563947 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:17:48.563958 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:17:48.563969 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:17:48.563980 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:17:48.563991 | orchestrator |
2026-03-31 04:17:48.564002 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-31 04:17:48.564013 | orchestrator |
2026-03-31 04:17:48.564027 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-31 04:17:48.564040 | orchestrator | Tuesday 31 March 2026 04:17:40 +0000 (0:00:01.638) 0:00:03.198 *********
2026-03-31 04:17:48.564053 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:17:48.564066 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:17:48.564079 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:17:48.564091 | orchestrator | ok: [testbed-manager]
2026-03-31 04:17:48.564104 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:17:48.564116 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:17:48.564127 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:17:48.564138 | orchestrator |
2026-03-31 04:17:48.564154 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-31 04:17:48.564166 | orchestrator |
2026-03-31 04:17:48.564178 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-31 04:17:48.564189 | orchestrator | Tuesday 31 March 2026 04:17:47 +0000 (0:00:07.007) 0:00:10.206 *********
2026-03-31 04:17:48.564200 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:17:48.564211 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:17:48.564223 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:17:48.564234 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:17:48.564245 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:17:48.564280 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:17:48.564292 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:17:48.564303 | orchestrator |
2026-03-31 04:17:48.564314 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:17:48.564325 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564338 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564349 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564360 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564371 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564382 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564393 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:17:48.564403 | orchestrator |
2026-03-31 04:17:48.564414 | orchestrator |
2026-03-31 04:17:48.564425 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:17:48.564436 | orchestrator | Tuesday 31 March 2026 04:17:47 +0000 (0:00:00.627) 0:00:10.833 *********
2026-03-31 04:17:48.564447 | orchestrator | ===============================================================================
2026-03-31 04:17:48.564458 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.01s
2026-03-31 04:17:48.564469 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.64s
2026-03-31 04:17:48.564479 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s
2026-03-31 04:17:48.564490 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-03-31 04:17:48.976824 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-03-31 04:17:49.062548 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 04:17:49.063426 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-31 04:17:49.107029 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-31 04:17:49.107120 | orchestrator
| + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2024.2
2026-03-31 04:17:49.112410 | orchestrator | + set -e
2026-03-31 04:17:49.112475 | orchestrator | + NAMESPACE=kolla/release/2024.2
2026-03-31 04:17:49.112482 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2024.2#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-31 04:17:49.119097 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-03-31 04:17:49.129336 | orchestrator |
2026-03-31 04:17:49.129418 | orchestrator | # UPGRADE SERVICES
2026-03-31 04:17:49.129429 | orchestrator |
2026-03-31 04:17:49.129437 | orchestrator | + set -e
2026-03-31 04:17:49.129444 | orchestrator | + echo
2026-03-31 04:17:49.129451 | orchestrator | + echo '# UPGRADE SERVICES'
2026-03-31 04:17:49.129458 | orchestrator | + echo
2026-03-31 04:17:49.129465 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 04:17:49.129471 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 04:17:49.129477 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 04:17:49.129484 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 04:17:49.129490 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 04:17:49.129498 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 04:17:49.129506 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 04:17:49.129513 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 04:17:49.129520 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 04:17:49.129527 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 04:17:49.129535 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 04:17:49.129540 | orchestrator | ++ export ARA=false
2026-03-31 04:17:49.129574 | orchestrator | ++ ARA=false
2026-03-31 04:17:49.129582 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 04:17:49.129589 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 04:17:49.129596 | orchestrator | ++ export TEMPEST=false
2026-03-31 04:17:49.129602 | orchestrator | ++ TEMPEST=false
2026-03-31 04:17:49.129625 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 04:17:49.129632 | orchestrator | ++ IS_ZUUL=true
2026-03-31 04:17:49.129638 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:17:49.129645 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:17:49.129651 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 04:17:49.129657 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 04:17:49.129663 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 04:17:49.129670 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 04:17:49.129676 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 04:17:49.129682 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 04:17:49.129688 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 04:17:49.129694 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 04:17:49.129701 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-31 04:17:49.129707 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-31 04:17:49.129712 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-03-31 04:17:49.129719 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-03-31 04:17:49.129725 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-03-31 04:17:49.136801 | orchestrator | + set -e
2026-03-31 04:17:49.136898 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:17:49.137510 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:17:49.137529 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:17:49.137537 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:17:49.137927 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:17:49.137991 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 04:17:49.138000 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 04:17:49.138006 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 04:17:49.138141 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 04:17:49.138150 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 04:17:49.138154 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 04:17:49.138160 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 04:17:49.138318 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 04:17:49.138328 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 04:17:49.138334 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 04:17:49.138340 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 04:17:49.138346 | orchestrator | ++ export ARA=false
2026-03-31 04:17:49.138352 | orchestrator | ++ ARA=false
2026-03-31 04:17:49.138358 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 04:17:49.138551 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 04:17:49.138565 | orchestrator | ++ export TEMPEST=false
2026-03-31 04:17:49.138573 | orchestrator | ++ TEMPEST=false
2026-03-31 04:17:49.138580 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 04:17:49.138587 | orchestrator | ++ IS_ZUUL=true
2026-03-31 04:17:49.138594 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:17:49.138601 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 04:17:49.138607 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 04:17:49.138747 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 04:17:49.138757 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 04:17:49.138764 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 04:17:49.138770 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 04:17:49.138775 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 04:17:49.138779 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 04:17:49.139426 | orchestrator |
2026-03-31 04:17:49.139456 | orchestrator | # PULL IMAGES
2026-03-31 04:17:49.139461 | orchestrator |
2026-03-31 04:17:49.139465 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 04:17:49.139469 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-31 04:17:49.139473 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-31 04:17:49.139478 | orchestrator | + echo
2026-03-31 04:17:49.139482 | orchestrator | + echo '# PULL IMAGES'
2026-03-31 04:17:49.139486 | orchestrator | + echo
2026-03-31 04:17:49.139522 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-31 04:17:49.202788 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 04:17:49.202885 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-31 04:17:51.401936 | orchestrator | 2026-03-31 04:17:51 | INFO  | Trying to run play pull-images in environment custom
2026-03-31 04:18:01.615208 | orchestrator | 2026-03-31 04:18:01 | INFO  | Task 973647d1-782a-4e2e-b36c-02d05910353c (pull-images) was prepared for execution.
2026-03-31 04:18:01.615290 | orchestrator | 2026-03-31 04:18:01 | INFO  | Task 973647d1-782a-4e2e-b36c-02d05910353c is running in background. No more output. Check ARA for logs.
2026-03-31 04:18:02.080740 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-03-31 04:18:02.087959 | orchestrator | + set -e
2026-03-31 04:18:02.088057 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 04:18:02.088074 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 04:18:02.088087 | orchestrator | ++ INTERACTIVE=false
2026-03-31 04:18:02.088099 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 04:18:02.088110 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 04:18:02.088122 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-31 04:18:02.089457 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-31 04:18:02.094211 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-03-31 04:18:02.094280 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-03-31 04:18:02.094947 | orchestrator | ++ semver 10.0.0 8.0.3
2026-03-31 04:18:02.159882 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-31 04:18:02.159972 | orchestrator | + osism apply frr
2026-03-31 04:18:14.529151 | orchestrator | 2026-03-31 04:18:14 | INFO  | Task b3095a6f-7fc3-4397-b164-988ed8170dfb (frr) was prepared for execution.
2026-03-31 04:18:14.529263 | orchestrator | 2026-03-31 04:18:14 | INFO  | It takes a moment until task b3095a6f-7fc3-4397-b164-988ed8170dfb (frr) has been started and output is visible here.
2026-03-31 04:18:34.283018 | orchestrator |
2026-03-31 04:18:34.283156 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-31 04:18:34.283180 | orchestrator |
2026-03-31 04:18:34.283197 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-31 04:18:34.283213 | orchestrator | Tuesday 31 March 2026 04:18:19 +0000 (0:00:00.327) 0:00:00.327 *********
2026-03-31 04:18:34.283231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 04:18:34.283250 | orchestrator |
2026-03-31 04:18:34.283267 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-31 04:18:34.283285 | orchestrator | Tuesday 31 March 2026 04:18:19 +0000 (0:00:00.258) 0:00:00.585 *********
2026-03-31 04:18:34.283304 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283323 | orchestrator |
2026-03-31 04:18:34.283341 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-31 04:18:34.283359 | orchestrator | Tuesday 31 March 2026 04:18:21 +0000 (0:00:01.521) 0:00:02.106 *********
2026-03-31 04:18:34.283376 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283393 | orchestrator |
2026-03-31 04:18:34.283410 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-31 04:18:34.283428 | orchestrator | Tuesday 31 March 2026 04:18:24 +0000 (0:00:02.935) 0:00:05.042 *********
2026-03-31 04:18:34.283445 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283462 | orchestrator |
2026-03-31 04:18:34.283481 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-31 04:18:34.283499 | orchestrator | Tuesday 31 March 2026 04:18:25 +0000 (0:00:00.982) 0:00:06.025 *********
2026-03-31 04:18:34.283517 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283535 | orchestrator |
2026-03-31 04:18:34.283554 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-31 04:18:34.283573 | orchestrator | Tuesday 31 March 2026 04:18:26 +0000 (0:00:00.990) 0:00:07.016 *********
2026-03-31 04:18:34.283591 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283611 | orchestrator |
2026-03-31 04:18:34.283631 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-31 04:18:34.283652 | orchestrator | Tuesday 31 March 2026 04:18:27 +0000 (0:00:01.621) 0:00:08.638 *********
2026-03-31 04:18:34.283669 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:18:34.283686 | orchestrator |
2026-03-31 04:18:34.283704 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-31 04:18:34.283722 | orchestrator | Tuesday 31 March 2026 04:18:27 +0000 (0:00:00.165) 0:00:08.804 *********
2026-03-31 04:18:34.283791 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:18:34.283810 | orchestrator |
2026-03-31 04:18:34.283855 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-31 04:18:34.283874 | orchestrator | Tuesday 31 March 2026 04:18:28 +0000 (0:00:00.980) 0:00:08.983 *********
2026-03-31 04:18:34.283892 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.283910 | orchestrator |
2026-03-31 04:18:34.283927 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-31 04:18:34.283945 | orchestrator | Tuesday 31 March 2026 04:18:29 +0000 (0:00:00.980) 0:00:09.964 *********
2026-03-31 04:18:34.283963 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-31 04:18:34.283980 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-31 04:18:34.284000 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-31 04:18:34.284018 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-31 04:18:34.284036 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-31 04:18:34.284054 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-31 04:18:34.284073 | orchestrator |
2026-03-31 04:18:34.284091 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-31 04:18:34.284109 | orchestrator | Tuesday 31 March 2026 04:18:31 +0000 (0:00:02.690) 0:00:12.654 *********
2026-03-31 04:18:34.284127 | orchestrator | ok: [testbed-manager]
2026-03-31 04:18:34.284144 | orchestrator |
2026-03-31 04:18:34.284162 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:18:34.284179 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:18:34.284197 | orchestrator |
2026-03-31 04:18:34.284215 | orchestrator |
2026-03-31 04:18:34.284233 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:18:34.284251 | orchestrator | Tuesday 31 March 2026 04:18:33 +0000 (0:00:02.081) 0:00:14.736 *********
2026-03-31 04:18:34.284270 | orchestrator | ===============================================================================
2026-03-31 04:18:34.284288 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.94s
2026-03-31 04:18:34.284306 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.69s
2026-03-31 04:18:34.284325 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.08s
2026-03-31 04:18:34.284343 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.62s
2026-03-31 04:18:34.284361 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.52s
2026-03-31 04:18:34.284378 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s
2026-03-31 04:18:34.284396 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.98s
2026-03-31 04:18:34.284477 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.98s
2026-03-31 04:18:34.284520 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s
2026-03-31 04:18:34.284539 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s
2026-03-31 04:18:34.284557 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s
2026-03-31 04:18:34.885206 | orchestrator | + osism apply kubernetes
2026-03-31 04:18:37.486881 | orchestrator | 2026-03-31 04:18:37 | INFO  | Task c2defd69-3c35-484b-9664-0f46f3b04f9f (kubernetes) was prepared for execution.
2026-03-31 04:18:37.486951 | orchestrator | 2026-03-31 04:18:37 | INFO  | It takes a moment until task c2defd69-3c35-484b-9664-0f46f3b04f9f (kubernetes) has been started and output is visible here.
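The "Set sysctl parameters" task in the frr play above applies six `net.ipv4` keys on the manager. A standalone sketch that renders the same key/value list into a sysctl.d-style fragment; it writes to a temporary file rather than `/etc/sysctl.d` and does not apply the settings (that would be `sysctl -p`, which needs root):

```shell
#!/usr/bin/env bash
set -e

# The key/value pairs applied by the frr role, as shown in the task output.
params="
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.fib_multipath_hash_policy=1
net.ipv4.conf.default.ignore_routes_with_linkdown=1
net.ipv4.conf.all.rp_filter=2
"

# Render a sysctl.d-style fragment into a temp file.
conf=$(mktemp)
printf '%s\n' '# frr sysctl parameters' > "$conf"
for kv in $params; do
    printf '%s\n' "$kv" >> "$conf"
done

# Count the rendered settings; prints 6.
grep -c '^net\.ipv4\.' "$conf"
```

Applying the fragment on a real host would be `sudo sysctl -p "$conf"`; the Ansible role does the equivalent per item via the sysctl module.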
2026-03-31 04:19:06.401108 | orchestrator | 2026-03-31 04:19:06.401237 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-31 04:19:06.401254 | orchestrator | 2026-03-31 04:19:06.401265 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-31 04:19:06.401276 | orchestrator | Tuesday 31 March 2026 04:18:43 +0000 (0:00:00.194) 0:00:00.194 ********* 2026-03-31 04:19:06.401287 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.401299 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:19:06.401308 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.401318 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.401328 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.401337 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.401347 | orchestrator | 2026-03-31 04:19:06.401357 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-31 04:19:06.401367 | orchestrator | Tuesday 31 March 2026 04:18:44 +0000 (0:00:01.211) 0:00:01.406 ********* 2026-03-31 04:19:06.401377 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.401388 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.401397 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.401407 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.401417 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.401448 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.401458 | orchestrator | 2026-03-31 04:19:06.401468 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-31 04:19:06.401478 | orchestrator | Tuesday 31 March 2026 04:18:45 +0000 (0:00:01.225) 0:00:02.631 ********* 2026-03-31 04:19:06.401489 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.401506 | orchestrator | skipping: [testbed-node-4] 2026-03-31 
04:19:06.401523 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.401540 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.401556 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.401566 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.401576 | orchestrator | 2026-03-31 04:19:06.401588 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-31 04:19:06.401599 | orchestrator | Tuesday 31 March 2026 04:18:46 +0000 (0:00:01.180) 0:00:03.812 ********* 2026-03-31 04:19:06.401611 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.401622 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:19:06.401633 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.401644 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.401655 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.401667 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.401679 | orchestrator | 2026-03-31 04:19:06.401691 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-31 04:19:06.401703 | orchestrator | Tuesday 31 March 2026 04:18:48 +0000 (0:00:02.071) 0:00:05.883 ********* 2026-03-31 04:19:06.401714 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.401725 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:19:06.401737 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.401749 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.401760 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.401771 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.401783 | orchestrator | 2026-03-31 04:19:06.401819 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-31 04:19:06.401837 | orchestrator | Tuesday 31 March 2026 04:18:51 +0000 (0:00:02.227) 0:00:08.111 ********* 2026-03-31 04:19:06.401848 | orchestrator | ok: [testbed-node-4] 2026-03-31 
04:19:06.401860 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.401871 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.401882 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.401893 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.401905 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.401916 | orchestrator | 2026-03-31 04:19:06.401940 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-31 04:19:06.401951 | orchestrator | Tuesday 31 March 2026 04:18:53 +0000 (0:00:02.123) 0:00:10.234 ********* 2026-03-31 04:19:06.401983 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.401993 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.402002 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.402012 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.402098 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.402122 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.402132 | orchestrator | 2026-03-31 04:19:06.402142 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-31 04:19:06.402153 | orchestrator | Tuesday 31 March 2026 04:18:54 +0000 (0:00:01.058) 0:00:11.292 ********* 2026-03-31 04:19:06.402163 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.402172 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.402182 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.402192 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.402202 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.402212 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.402222 | orchestrator | 2026-03-31 04:19:06.402231 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-31 04:19:06.402241 | orchestrator | Tuesday 31 March 2026 04:18:55 +0000 
(0:00:01.205) 0:00:12.498 ********* 2026-03-31 04:19:06.402251 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402261 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402271 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.402281 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402291 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402306 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.402316 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402326 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402335 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.402345 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402355 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402366 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.402407 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402425 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402442 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.402458 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-31 04:19:06.402473 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-31 04:19:06.402490 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.402505 | orchestrator | 2026-03-31 04:19:06.402518 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin 
to sudo secure_path] ********************* 2026-03-31 04:19:06.402533 | orchestrator | Tuesday 31 March 2026 04:18:56 +0000 (0:00:00.777) 0:00:13.276 ********* 2026-03-31 04:19:06.402569 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.402586 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.402602 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.402618 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.402636 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.402652 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.402668 | orchestrator | 2026-03-31 04:19:06.402678 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-31 04:19:06.402689 | orchestrator | Tuesday 31 March 2026 04:18:57 +0000 (0:00:01.762) 0:00:15.039 ********* 2026-03-31 04:19:06.402699 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.402720 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:19:06.402735 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.402755 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.402777 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.402792 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.402901 | orchestrator | 2026-03-31 04:19:06.402917 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-31 04:19:06.402931 | orchestrator | Tuesday 31 March 2026 04:18:59 +0000 (0:00:01.049) 0:00:16.088 ********* 2026-03-31 04:19:06.402944 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:19:06.402959 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:19:06.402975 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:19:06.402989 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:19:06.403004 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:19:06.403019 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:19:06.403033 | 
orchestrator | 2026-03-31 04:19:06.403049 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-31 04:19:06.403062 | orchestrator | Tuesday 31 March 2026 04:19:01 +0000 (0:00:02.035) 0:00:18.124 ********* 2026-03-31 04:19:06.403078 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.403094 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.403109 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.403140 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.403159 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.403174 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.403192 | orchestrator | 2026-03-31 04:19:06.403209 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-31 04:19:06.403225 | orchestrator | Tuesday 31 March 2026 04:19:02 +0000 (0:00:01.144) 0:00:19.268 ********* 2026-03-31 04:19:06.403241 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.403264 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.403274 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.403284 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.403293 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.403303 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.403313 | orchestrator | 2026-03-31 04:19:06.403323 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-31 04:19:06.403335 | orchestrator | Tuesday 31 March 2026 04:19:03 +0000 (0:00:01.753) 0:00:21.021 ********* 2026-03-31 04:19:06.403345 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.403354 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.403365 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.403383 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 04:19:06.403402 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.403415 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:19:06.403428 | orchestrator | 2026-03-31 04:19:06.403440 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-31 04:19:06.403452 | orchestrator | Tuesday 31 March 2026 04:19:04 +0000 (0:00:00.807) 0:00:21.828 ********* 2026-03-31 04:19:06.403464 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-31 04:19:06.403477 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-31 04:19:06.403490 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.403503 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-31 04:19:06.403515 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-31 04:19:06.403526 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.403538 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-31 04:19:06.403551 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-31 04:19:06.403564 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:19:06.403578 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-31 04:19:06.403591 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-31 04:19:06.403620 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:19:06.403634 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-31 04:19:06.403649 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-31 04:19:06.403662 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:19:06.403675 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-31 04:19:06.403690 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-31 04:19:06.403698 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
04:19:06.403708 | orchestrator | 2026-03-31 04:19:06.403722 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-31 04:19:06.403753 | orchestrator | Tuesday 31 March 2026 04:19:05 +0000 (0:00:01.123) 0:00:22.952 ********* 2026-03-31 04:19:06.403767 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:19:06.403780 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:19:06.403834 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:20:32.551441 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:20:32.551582 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.551595 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.551605 | orchestrator | 2026-03-31 04:20:32.551618 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-31 04:20:32.551632 | orchestrator | Tuesday 31 March 2026 04:19:06 +0000 (0:00:01.095) 0:00:24.048 ********* 2026-03-31 04:20:32.551644 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:20:32.551654 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:20:32.551665 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:20:32.551676 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:20:32.551687 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.551697 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.551708 | orchestrator | 2026-03-31 04:20:32.551719 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-31 04:20:32.551794 | orchestrator | 2026-03-31 04:20:32.551804 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-31 04:20:32.551813 | orchestrator | Tuesday 31 March 2026 04:19:08 +0000 (0:00:01.637) 0:00:25.685 ********* 2026-03-31 04:20:32.551820 | orchestrator | ok: [testbed-node-2] 2026-03-31 
04:20:32.551828 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.551835 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.551841 | orchestrator | 2026-03-31 04:20:32.551848 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-31 04:20:32.551854 | orchestrator | Tuesday 31 March 2026 04:19:09 +0000 (0:00:01.290) 0:00:26.976 ********* 2026-03-31 04:20:32.551861 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.551867 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.551873 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.551880 | orchestrator | 2026-03-31 04:20:32.551886 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-31 04:20:32.551893 | orchestrator | Tuesday 31 March 2026 04:19:11 +0000 (0:00:01.451) 0:00:28.427 ********* 2026-03-31 04:20:32.551900 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:20:32.551907 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:20:32.551913 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:20:32.551920 | orchestrator | 2026-03-31 04:20:32.551926 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-31 04:20:32.551934 | orchestrator | Tuesday 31 March 2026 04:19:12 +0000 (0:00:01.201) 0:00:29.628 ********* 2026-03-31 04:20:32.551941 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.551948 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.551956 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.551963 | orchestrator | 2026-03-31 04:20:32.551970 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-31 04:20:32.551978 | orchestrator | Tuesday 31 March 2026 04:19:14 +0000 (0:00:01.578) 0:00:31.207 ********* 2026-03-31 04:20:32.551985 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:20:32.552016 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 04:20:32.552024 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552031 | orchestrator | 2026-03-31 04:20:32.552039 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-31 04:20:32.552046 | orchestrator | Tuesday 31 March 2026 04:19:14 +0000 (0:00:00.576) 0:00:31.784 ********* 2026-03-31 04:20:32.552053 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.552060 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.552068 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552075 | orchestrator | 2026-03-31 04:20:32.552081 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-31 04:20:32.552087 | orchestrator | Tuesday 31 March 2026 04:19:16 +0000 (0:00:01.451) 0:00:33.235 ********* 2026-03-31 04:20:32.552094 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.552100 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552106 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.552112 | orchestrator | 2026-03-31 04:20:32.552119 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-31 04:20:32.552125 | orchestrator | Tuesday 31 March 2026 04:19:18 +0000 (0:00:01.818) 0:00:35.054 ********* 2026-03-31 04:20:32.552132 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:20:32.552139 | orchestrator | 2026-03-31 04:20:32.552150 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-31 04:20:32.552161 | orchestrator | Tuesday 31 March 2026 04:19:18 +0000 (0:00:00.653) 0:00:35.708 ********* 2026-03-31 04:20:32.552171 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.552181 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552193 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.552202 | 
orchestrator | 2026-03-31 04:20:32.552208 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-31 04:20:32.552215 | orchestrator | Tuesday 31 March 2026 04:19:21 +0000 (0:00:02.524) 0:00:38.232 ********* 2026-03-31 04:20:32.552221 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552227 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552233 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.552240 | orchestrator | 2026-03-31 04:20:32.552246 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-31 04:20:32.552252 | orchestrator | Tuesday 31 March 2026 04:19:22 +0000 (0:00:00.968) 0:00:39.201 ********* 2026-03-31 04:20:32.552258 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.552265 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552271 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:20:32.552277 | orchestrator | 2026-03-31 04:20:32.552284 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-31 04:20:32.552290 | orchestrator | Tuesday 31 March 2026 04:19:23 +0000 (0:00:01.038) 0:00:40.240 ********* 2026-03-31 04:20:32.552297 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.552303 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552309 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:20:32.552316 | orchestrator | 2026-03-31 04:20:32.552322 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-31 04:20:32.552328 | orchestrator | Tuesday 31 March 2026 04:19:24 +0000 (0:00:01.676) 0:00:41.916 ********* 2026-03-31 04:20:32.552334 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:20:32.552341 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.552364 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552370 | 
orchestrator | 2026-03-31 04:20:32.552377 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-31 04:20:32.552383 | orchestrator | Tuesday 31 March 2026 04:19:25 +0000 (0:00:00.527) 0:00:42.444 ********* 2026-03-31 04:20:32.552389 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:20:32.552395 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:20:32.552402 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:20:32.552414 | orchestrator | 2026-03-31 04:20:32.552420 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-31 04:20:32.552427 | orchestrator | Tuesday 31 March 2026 04:19:26 +0000 (0:00:00.865) 0:00:43.310 ********* 2026-03-31 04:20:32.552433 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:20:32.552439 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:20:32.552445 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:20:32.552452 | orchestrator | 2026-03-31 04:20:32.552458 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-31 04:20:32.552464 | orchestrator | Tuesday 31 March 2026 04:19:28 +0000 (0:00:01.875) 0:00:45.185 ********* 2026-03-31 04:20:32.552470 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552477 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.552483 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.552489 | orchestrator | 2026-03-31 04:20:32.552495 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-31 04:20:32.552502 | orchestrator | Tuesday 31 March 2026 04:19:29 +0000 (0:00:01.013) 0:00:46.198 ********* 2026-03-31 04:20:32.552508 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:20:32.552514 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:20:32.552520 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:20:32.552527 | orchestrator | 2026-03-31 04:20:32.552533 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-31 04:20:32.552539 | orchestrator | Tuesday 31 March 2026 04:19:30 +0000 (0:00:01.186) 0:00:47.385 *********
2026-03-31 04:20:32.552546 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-31 04:20:32.552559 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-31 04:20:32.552566 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-31 04:20:32.552572 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-31 04:20:32.552578 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-31 04:20:32.552584 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-31 04:20:32.552591 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-31 04:20:32.552597 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-31 04:20:32.552610 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
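The "Verify that all nodes actually joined" task above polls with 20 retries until every node shows up, burning three attempts per node in this run before succeeding. A generic retry loop of the same shape, using a stub check that succeeds on the third attempt (the real task queries the cluster via k3s/kubectl, which is not available here):

```shell
#!/usr/bin/env bash
set -e

# Stub check: fails twice, then succeeds -- stands in for the real
# "have all nodes joined?" query against the cluster.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
check_nodes_joined() {
    n=$(($(cat "$attempts_file") + 1))
    echo "$n" > "$attempts_file"
    [ "$n" -ge 3 ]
}

# Bounded retry loop, like the Ansible `retries:`/`delay:` pattern
# behind the FAILED - RETRYING messages above.
retries=20
delay=0   # the real task sleeps between attempts
i=0
until check_nodes_joined; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
        echo "nodes did not join in time" >&2
        exit 1
    fi
    echo "FAILED - RETRYING ($((retries - i)) retries left)"
    sleep "$delay"
done
attempts=$(cat "$attempts_file")
rm -f "$attempts_file"
echo "all nodes joined after $attempts attempts"
```

With the stub, the loop prints two retry messages and then succeeds on attempt three, mirroring the "(20 retries left)" countdown in the log.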
2026-03-31 04:20:32.552617 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:32.552623 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:32.552630 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:32.552636 | orchestrator |
2026-03-31 04:20:32.552642 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-31 04:20:32.552649 | orchestrator | Tuesday 31 March 2026 04:20:03 +0000 (0:00:33.613) 0:01:20.999 *********
2026-03-31 04:20:32.552655 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:20:32.552661 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:20:32.552667 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:20:32.552674 | orchestrator |
2026-03-31 04:20:32.552680 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-31 04:20:32.552686 | orchestrator | Tuesday 31 March 2026 04:20:04 +0000 (0:00:00.386) 0:01:21.386 *********
2026-03-31 04:20:32.552693 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:32.552699 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:32.552711 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:32.552717 | orchestrator |
2026-03-31 04:20:32.552724 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-31 04:20:32.552749 | orchestrator | Tuesday 31 March 2026 04:20:05 +0000 (0:00:01.067) 0:01:22.453 *********
2026-03-31 04:20:32.552759 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:32.552765 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:32.552771 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:32.552777 | orchestrator |
2026-03-31 04:20:32.552784 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-31 04:20:32.552794 | orchestrator | Tuesday 31 March 2026 04:20:07 +0000 (0:00:01.727) 0:01:24.180 *********
2026-03-31 04:20:32.552800 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:32.552807 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:32.552813 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:32.552819 | orchestrator |
2026-03-31 04:20:32.552825 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-31 04:20:32.552832 | orchestrator | Tuesday 31 March 2026 04:20:31 +0000 (0:00:24.699) 0:01:48.880 *********
2026-03-31 04:20:32.552838 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:32.552844 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:32.552850 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:32.552857 | orchestrator |
2026-03-31 04:20:32.552863 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-31 04:20:32.552874 | orchestrator | Tuesday 31 March 2026 04:20:32 +0000 (0:00:00.706) 0:01:49.587 *********
2026-03-31 04:20:59.731177 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732052 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732079 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732086 | orchestrator |
2026-03-31 04:20:59.732094 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-31 04:20:59.732102 | orchestrator | Tuesday 31 March 2026 04:20:33 +0000 (0:00:00.773) 0:01:50.361 *********
2026-03-31 04:20:59.732109 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:59.732116 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:59.732122 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:59.732127 | orchestrator |
2026-03-31 04:20:59.732133 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-31 04:20:59.732139 | orchestrator | Tuesday 31 March 2026 04:20:34 +0000 (0:00:01.126) 0:01:51.487 *********
2026-03-31 04:20:59.732145 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732150 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732156 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732161 | orchestrator |
2026-03-31 04:20:59.732167 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-31 04:20:59.732172 | orchestrator | Tuesday 31 March 2026 04:20:35 +0000 (0:00:00.712) 0:01:52.199 *********
2026-03-31 04:20:59.732178 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732183 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732189 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732194 | orchestrator |
2026-03-31 04:20:59.732200 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-31 04:20:59.732205 | orchestrator | Tuesday 31 March 2026 04:20:35 +0000 (0:00:00.399) 0:01:52.598 *********
2026-03-31 04:20:59.732211 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:59.732216 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:59.732222 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:59.732228 | orchestrator |
2026-03-31 04:20:59.732233 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-31 04:20:59.732239 | orchestrator | Tuesday 31 March 2026 04:20:36 +0000 (0:00:00.998) 0:01:53.597 *********
2026-03-31 04:20:59.732244 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732250 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732255 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732261 | orchestrator |
2026-03-31 04:20:59.732283 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-31 04:20:59.732290 | orchestrator | Tuesday 31 March 2026 04:20:37 +0000 (0:00:00.747) 0:01:54.345 *********
2026-03-31 04:20:59.732295 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:59.732301 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:59.732306 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:59.732312 | orchestrator |
2026-03-31 04:20:59.732317 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-31 04:20:59.732323 | orchestrator | Tuesday 31 March 2026 04:20:38 +0000 (0:00:00.990) 0:01:55.335 *********
2026-03-31 04:20:59.732329 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:20:59.732334 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:20:59.732339 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:20:59.732345 | orchestrator |
2026-03-31 04:20:59.732350 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-31 04:20:59.732356 | orchestrator | Tuesday 31 March 2026 04:20:39 +0000 (0:00:01.005) 0:01:56.341 *********
2026-03-31 04:20:59.732361 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:20:59.732367 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:20:59.732372 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:20:59.732378 | orchestrator |
2026-03-31 04:20:59.732383 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-31 04:20:59.732388 | orchestrator | Tuesday 31 March 2026 04:20:39 +0000 (0:00:00.607) 0:01:56.949 *********
2026-03-31 04:20:59.732394 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:20:59.732399 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:20:59.732405 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:20:59.732410 | orchestrator |
2026-03-31 04:20:59.732416 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-31 04:20:59.732421 | orchestrator | Tuesday 31 March 2026 04:20:40 +0000 (0:00:00.392) 0:01:57.341 *********
2026-03-31 04:20:59.732426 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732432 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732437 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732443 | orchestrator |
2026-03-31 04:20:59.732448 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-31 04:20:59.732454 | orchestrator | Tuesday 31 March 2026 04:20:40 +0000 (0:00:00.691) 0:01:58.033 *********
2026-03-31 04:20:59.732459 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:20:59.732465 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:20:59.732470 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:20:59.732475 | orchestrator |
2026-03-31 04:20:59.732481 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-31 04:20:59.732488 | orchestrator | Tuesday 31 March 2026 04:20:41 +0000 (0:00:00.722) 0:01:58.755 *********
2026-03-31 04:20:59.732493 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 04:20:59.732499 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 04:20:59.732505 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 04:20:59.732511 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-31 04:20:59.732516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 04:20:59.732522 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 04:20:59.732528 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-31 04:20:59.732533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-31 04:20:59.732556 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 04:20:59.732566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-31 04:20:59.732572 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-31 04:20:59.732578 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 04:20:59.732583 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 04:20:59.732588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-31 04:20:59.732594 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 04:20:59.732599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 04:20:59.732605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-31 04:20:59.732610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 04:20:59.732616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 04:20:59.732621 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-31 04:20:59.732627 | orchestrator |
2026-03-31 04:20:59.732632 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-31 04:20:59.732637 | orchestrator |
2026-03-31 04:20:59.732643 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-31 04:20:59.732648 | orchestrator | Tuesday 31 March 2026 04:20:45 +0000 (0:00:03.374) 0:02:02.130 *********
2026-03-31 04:20:59.732654 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.732659 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.732665 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.732670 | orchestrator |
2026-03-31 04:20:59.732675 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-31 04:20:59.732681 | orchestrator | Tuesday 31 March 2026 04:20:45 +0000 (0:00:00.410) 0:02:02.541 *********
2026-03-31 04:20:59.732686 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.732692 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.732753 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.732762 | orchestrator |
2026-03-31 04:20:59.732767 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-31 04:20:59.732773 | orchestrator | Tuesday 31 March 2026 04:20:46 +0000 (0:00:00.958) 0:02:03.499 *********
2026-03-31 04:20:59.732779 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.732784 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.732789 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.732795 | orchestrator |
2026-03-31 04:20:59.732800 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-31 04:20:59.732806 | orchestrator | Tuesday 31 March 2026 04:20:46 +0000 (0:00:00.448) 0:02:03.948 *********
2026-03-31 04:20:59.732811 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 04:20:59.732817 | orchestrator |
2026-03-31 04:20:59.732822 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-31 04:20:59.732828 | orchestrator | Tuesday 31 March 2026 04:20:47 +0000 (0:00:00.594) 0:02:04.542 *********
2026-03-31 04:20:59.732833 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:20:59.732839 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:20:59.732844 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:20:59.732849 | orchestrator |
2026-03-31 04:20:59.732855 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-31 04:20:59.732860 | orchestrator | Tuesday 31 March 2026 04:20:48 +0000 (0:00:00.669) 0:02:05.212 *********
2026-03-31 04:20:59.732866 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:20:59.732871 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:20:59.732877 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:20:59.732886 | orchestrator |
2026-03-31 04:20:59.732892 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-31 04:20:59.732897 | orchestrator | Tuesday 31 March 2026 04:20:48 +0000 (0:00:00.430) 0:02:05.642 *********
2026-03-31 04:20:59.732903 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:20:59.732908 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:20:59.732914 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:20:59.732919 | orchestrator |
2026-03-31 04:20:59.732924 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-31 04:20:59.732930 | orchestrator | Tuesday 31 March 2026 04:20:48 +0000 (0:00:00.400) 0:02:06.042 *********
2026-03-31 04:20:59.732935 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.732941 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.732946 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.732952 | orchestrator |
2026-03-31 04:20:59.732957 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-31 04:20:59.732966 | orchestrator | Tuesday 31 March 2026 04:20:49 +0000 (0:00:00.706) 0:02:06.749 *********
2026-03-31 04:20:59.732972 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.732977 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.732983 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.732988 | orchestrator |
2026-03-31 04:20:59.732993 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-31 04:20:59.732999 | orchestrator | Tuesday 31 March 2026 04:20:51 +0000 (0:00:01.570) 0:02:08.319 *********
2026-03-31 04:20:59.733004 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:20:59.733010 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:20:59.733015 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:20:59.733021 | orchestrator |
2026-03-31 04:20:59.733026 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-31 04:20:59.733032 | orchestrator | Tuesday 31 March 2026 04:20:52 +0000 (0:00:01.352) 0:02:09.672 *********
2026-03-31 04:20:59.733042 | orchestrator | changed: [testbed-node-4]
2026-03-31 04:21:37.555943 | orchestrator | changed: [testbed-node-3]
2026-03-31 04:21:37.556060 | orchestrator | changed: [testbed-node-5]
2026-03-31 04:21:37.556076 | orchestrator |
2026-03-31 04:21:37.556089 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-31 04:21:37.556102 | orchestrator |
2026-03-31 04:21:37.556114 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-31 04:21:37.556126 | orchestrator | Tuesday 31 March 2026 04:20:59 +0000 (0:00:07.099) 0:02:16.772 *********
2026-03-31 04:21:37.556138 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556150 | orchestrator |
2026-03-31 04:21:37.556161 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-31 04:21:37.556173 | orchestrator | Tuesday 31 March 2026 04:21:00 +0000 (0:00:00.832) 0:02:17.605 *********
2026-03-31 04:21:37.556184 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556195 | orchestrator |
2026-03-31 04:21:37.556206 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-31 04:21:37.556217 | orchestrator | Tuesday 31 March 2026 04:21:01 +0000 (0:00:00.571) 0:02:18.177 *********
2026-03-31 04:21:37.556228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-31 04:21:37.556239 | orchestrator |
2026-03-31 04:21:37.556250 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-31 04:21:37.556261 | orchestrator | Tuesday 31 March 2026 04:21:01 +0000 (0:00:00.606) 0:02:18.783 *********
2026-03-31 04:21:37.556272 | orchestrator | changed: [testbed-manager]
2026-03-31 04:21:37.556283 | orchestrator |
2026-03-31 04:21:37.556294 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-31 04:21:37.556305 | orchestrator | Tuesday 31 March 2026 04:21:02 +0000 (0:00:01.108) 0:02:19.892 *********
2026-03-31 04:21:37.556315 | orchestrator | changed: [testbed-manager]
2026-03-31 04:21:37.556326 | orchestrator |
2026-03-31 04:21:37.556337 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-31 04:21:37.556372 | orchestrator | Tuesday 31 March 2026 04:21:03 +0000 (0:00:00.668) 0:02:20.560 *********
2026-03-31 04:21:37.556384 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-31 04:21:37.556395 | orchestrator |
2026-03-31 04:21:37.556406 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-31 04:21:37.556417 | orchestrator | Tuesday 31 March 2026 04:21:05 +0000 (0:00:01.677) 0:02:22.238 *********
2026-03-31 04:21:37.556428 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-31 04:21:37.556439 | orchestrator |
2026-03-31 04:21:37.556450 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-31 04:21:37.556463 | orchestrator | Tuesday 31 March 2026 04:21:06 +0000 (0:00:00.944) 0:02:23.182 *********
2026-03-31 04:21:37.556476 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556488 | orchestrator |
2026-03-31 04:21:37.556501 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-31 04:21:37.556513 | orchestrator | Tuesday 31 March 2026 04:21:06 +0000 (0:00:00.821) 0:02:24.004 *********
2026-03-31 04:21:37.556525 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556538 | orchestrator |
2026-03-31 04:21:37.556550 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-31 04:21:37.556563 | orchestrator |
2026-03-31 04:21:37.556575 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-31 04:21:37.556588 | orchestrator | Tuesday 31 March 2026 04:21:07 +0000 (0:00:00.564) 0:02:24.569 *********
2026-03-31 04:21:37.556600 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556613 | orchestrator |
2026-03-31 04:21:37.556626 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-31 04:21:37.556638 | orchestrator | Tuesday 31 March 2026 04:21:07 +0000 (0:00:00.241) 0:02:24.810 *********
2026-03-31 04:21:37.556651 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 04:21:37.556665 | orchestrator |
2026-03-31 04:21:37.556704 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-31 04:21:37.556725 | orchestrator | Tuesday 31 March 2026 04:21:08 +0000 (0:00:00.283) 0:02:25.093 *********
2026-03-31 04:21:37.556738 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556750 | orchestrator |
2026-03-31 04:21:37.556769 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-31 04:21:37.556787 | orchestrator | Tuesday 31 March 2026 04:21:09 +0000 (0:00:01.052) 0:02:26.146 *********
2026-03-31 04:21:37.556806 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556826 | orchestrator |
2026-03-31 04:21:37.556844 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-31 04:21:37.556860 | orchestrator | Tuesday 31 March 2026 04:21:11 +0000 (0:00:02.183) 0:02:28.330 *********
2026-03-31 04:21:37.556871 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556882 | orchestrator |
2026-03-31 04:21:37.556910 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-31 04:21:37.556934 | orchestrator | Tuesday 31 March 2026 04:21:11 +0000 (0:00:00.519) 0:02:28.850 *********
2026-03-31 04:21:37.556946 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.556957 | orchestrator |
2026-03-31 04:21:37.556968 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-31 04:21:37.556979 | orchestrator | Tuesday 31 March 2026 04:21:12 +0000 (0:00:00.785) 0:02:29.635 *********
2026-03-31 04:21:37.556990 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.557000 | orchestrator |
2026-03-31 04:21:37.557012 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-31 04:21:37.557031 | orchestrator | Tuesday 31 March 2026 04:21:13 +0000 (0:00:00.781) 0:02:30.416 *********
2026-03-31 04:21:37.557049 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.557066 | orchestrator |
2026-03-31 04:21:37.557102 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-31 04:21:37.557121 | orchestrator | Tuesday 31 March 2026 04:21:15 +0000 (0:00:01.720) 0:02:32.137 *********
2026-03-31 04:21:37.557154 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:37.557173 | orchestrator |
2026-03-31 04:21:37.557185 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-31 04:21:37.557196 | orchestrator |
2026-03-31 04:21:37.557207 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-31 04:21:37.557238 | orchestrator | Tuesday 31 March 2026 04:21:15 +0000 (0:00:00.650) 0:02:32.787 *********
2026-03-31 04:21:37.557249 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:21:37.557260 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:21:37.557271 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:21:37.557282 | orchestrator |
2026-03-31 04:21:37.557293 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-31 04:21:37.557304 | orchestrator | Tuesday 31 March 2026 04:21:16 +0000 (0:00:00.373) 0:02:33.161 *********
2026-03-31 04:21:37.557315 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:21:37.557327 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:21:37.557338 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:21:37.557349 | orchestrator |
2026-03-31 04:21:37.557360 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-31 04:21:37.557371 | orchestrator | Tuesday 31 March 2026 04:21:16 +0000 (0:00:00.629) 0:02:33.790 *********
2026-03-31 04:21:37.557382 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:21:37.557393 | orchestrator |
2026-03-31 04:21:37.557404 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-31 04:21:37.557415 | orchestrator | Tuesday 31 March 2026 04:21:17 +0000 (0:00:00.635) 0:02:34.426 *********
2026-03-31 04:21:37.557426 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557437 | orchestrator |
2026-03-31 04:21:37.557448 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-31 04:21:37.557459 | orchestrator | Tuesday 31 March 2026 04:21:18 +0000 (0:00:00.884) 0:02:35.311 *********
2026-03-31 04:21:37.557470 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557481 | orchestrator |
2026-03-31 04:21:37.557492 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-31 04:21:37.557503 | orchestrator | Tuesday 31 March 2026 04:21:19 +0000 (0:00:01.039) 0:02:36.350 *********
2026-03-31 04:21:37.557514 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:21:37.557525 | orchestrator |
2026-03-31 04:21:37.557536 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-31 04:21:37.557547 | orchestrator | Tuesday 31 March 2026 04:21:19 +0000 (0:00:00.159) 0:02:36.509 *********
2026-03-31 04:21:37.557558 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557569 | orchestrator |
2026-03-31 04:21:37.557580 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-31 04:21:37.557591 | orchestrator | Tuesday 31 March 2026 04:21:20 +0000 (0:00:01.121) 0:02:37.631 *********
2026-03-31 04:21:37.557602 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557613 | orchestrator |
2026-03-31 04:21:37.557624 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-31 04:21:37.557652 | orchestrator | Tuesday 31 March 2026 04:21:22 +0000 (0:00:01.932) 0:02:39.563 *********
2026-03-31 04:21:37.557664 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557675 | orchestrator |
2026-03-31 04:21:37.557765 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-31 04:21:37.557776 | orchestrator | Tuesday 31 March 2026 04:21:22 +0000 (0:00:00.206) 0:02:39.770 *********
2026-03-31 04:21:37.557787 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.557798 | orchestrator |
2026-03-31 04:21:37.557809 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-31 04:21:37.557820 | orchestrator | Tuesday 31 March 2026 04:21:22 +0000 (0:00:00.180) 0:02:39.950 *********
2026-03-31 04:21:37.557830 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-31 04:21:37.557851 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-31 04:21:37.557864 | orchestrator | }
2026-03-31 04:21:37.557875 | orchestrator |
2026-03-31 04:21:37.557886 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-31 04:21:37.557898 | orchestrator | Tuesday 31 March 2026 04:21:23 +0000 (0:00:00.194) 0:02:40.145 *********
2026-03-31 04:21:37.557909 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:21:37.557920 | orchestrator |
2026-03-31 04:21:37.557930 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-31 04:21:37.557941 | orchestrator | Tuesday 31 March 2026 04:21:23 +0000 (0:00:00.161) 0:02:40.307 *********
2026-03-31 04:21:37.557952 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-31 04:21:37.557963 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-31 04:21:37.557974 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-31 04:21:37.557985 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-31 04:21:37.557996 | orchestrator |
2026-03-31 04:21:37.558007 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-31 04:21:37.558079 | orchestrator | Tuesday 31 March 2026 04:21:30 +0000 (0:00:07.533) 0:02:47.840 *********
2026-03-31 04:21:37.558092 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.558103 | orchestrator |
2026-03-31 04:21:37.558114 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-31 04:21:37.558131 | orchestrator | Tuesday 31 March 2026 04:21:32 +0000 (0:00:01.512) 0:02:49.353 *********
2026-03-31 04:21:37.558142 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.558153 | orchestrator |
2026-03-31 04:21:37.558164 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-31 04:21:37.558175 | orchestrator | Tuesday 31 March 2026 04:21:34 +0000 (0:00:01.743) 0:02:51.097 *********
2026-03-31 04:21:37.558222 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-31 04:21:37.558234 | orchestrator |
2026-03-31 04:21:37.558245 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-31 04:21:37.558256 | orchestrator | Tuesday 31 March 2026 04:21:37 +0000 (0:00:03.329) 0:02:54.426 *********
2026-03-31 04:21:37.558267 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:21:37.558278 | orchestrator |
2026-03-31 04:21:37.558299 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-31 04:21:58.619274 | orchestrator | Tuesday 31 March 2026 04:21:37 +0000 (0:00:00.167) 0:02:54.593 *********
2026-03-31 04:21:58.619382 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-31 04:21:58.619398 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-31 04:21:58.619409 | orchestrator |
2026-03-31 04:21:58.619420 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-31 04:21:58.619430 | orchestrator | Tuesday 31 March 2026 04:21:39 +0000 (0:00:02.128) 0:02:56.722
2026-03-31 04:21:58.619440 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:21:58.619451 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:21:58.619461 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:21:58.619470 | orchestrator |
2026-03-31 04:21:58.619480 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-31 04:21:58.619490 | orchestrator | Tuesday 31 March 2026 04:21:40 +0000 (0:00:00.814) 0:02:57.536 *********
2026-03-31 04:21:58.619500 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:21:58.619510 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:21:58.619520 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:21:58.619529 | orchestrator |
2026-03-31 04:21:58.619539 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-31 04:21:58.619549 | orchestrator |
2026-03-31 04:21:58.619558 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-31 04:21:58.619590 | orchestrator | Tuesday 31 March 2026 04:21:41 +0000 (0:00:00.958) 0:02:58.494 *********
2026-03-31 04:21:58.619601 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:58.619610 | orchestrator |
2026-03-31 04:21:58.619620 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-31 04:21:58.619629 | orchestrator | Tuesday 31 March 2026 04:21:41 +0000 (0:00:00.182) 0:02:58.677 *********
2026-03-31 04:21:58.619639 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-31 04:21:58.619650 | orchestrator |
2026-03-31 04:21:58.619659 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-31 04:21:58.619754 | orchestrator | Tuesday 31 March 2026 04:21:42 +0000 (0:00:00.546) 0:02:59.224 *********
2026-03-31 04:21:58.619765 | orchestrator | ok: [testbed-manager]
2026-03-31 04:21:58.619775 | orchestrator |
2026-03-31 04:21:58.619785 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-31 04:21:58.619794 | orchestrator |
2026-03-31 04:21:58.619804 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-31 04:21:58.619814 | orchestrator | Tuesday 31 March 2026 04:21:46 +0000 (0:00:04.402) 0:03:03.626 *********
2026-03-31 04:21:58.619825 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:21:58.619836 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:21:58.619847 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:21:58.619859 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:21:58.619869 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:21:58.619881 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:21:58.619892 | orchestrator |
2026-03-31 04:21:58.619903 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-31 04:21:58.619914 | orchestrator | Tuesday 31 March 2026 04:21:47 +0000 (0:00:00.734) 0:03:04.360 *********
2026-03-31 04:21:58.619925 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 04:21:58.619936 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 04:21:58.619947 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 04:21:58.619958 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 04:21:58.619970 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-31 04:21:58.619981 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-31 04:21:58.619992 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 04:21:58.620003 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 04:21:58.620014 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 04:21:58.620025 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 04:21:58.620036 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-31 04:21:58.620047 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-31 04:21:58.620057 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 04:21:58.620069 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 04:21:58.620080 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 04:21:58.620091 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 04:21:58.620102 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-31 04:21:58.620113 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-31 04:21:58.620133 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 04:21:58.620144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 04:21:58.620156 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-31 04:21:58.620182 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-31 04:21:58.620193 | orchestrator | ok: [testbed-node-0 -> localhost] =>
(item=node-role.osism.tech/rook-mgr=true) 2026-03-31 04:21:58.620202 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-31 04:21:58.620212 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-31 04:21:58.620221 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-31 04:21:58.620231 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-31 04:21:58.620241 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-31 04:21:58.620250 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-31 04:21:58.620260 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-31 04:21:58.620269 | orchestrator | 2026-03-31 04:21:58.620279 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-31 04:21:58.620288 | orchestrator | Tuesday 31 March 2026 04:21:57 +0000 (0:00:09.709) 0:03:14.069 ********* 2026-03-31 04:21:58.620298 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:21:58.620308 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:21:58.620317 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:21:58.620327 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:21:58.620337 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:21:58.620346 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:21:58.620356 | orchestrator | 2026-03-31 04:21:58.620365 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-31 04:21:58.620375 | orchestrator | Tuesday 31 March 2026 04:21:57 +0000 (0:00:00.960) 0:03:15.030 ********* 2026-03-31 04:21:58.620385 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:21:58.620394 
| orchestrator | skipping: [testbed-node-4] 2026-03-31 04:21:58.620404 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:21:58.620413 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:21:58.620423 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:21:58.620433 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:21:58.620442 | orchestrator | 2026-03-31 04:21:58.620452 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:21:58.620462 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:21:58.620474 | orchestrator | testbed-node-0 : ok=53  changed=12  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-31 04:21:58.620484 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-31 04:21:58.620494 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-31 04:21:58.620504 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 04:21:58.620513 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 04:21:58.620523 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-31 04:21:58.620542 | orchestrator | 2026-03-31 04:21:58.620559 | orchestrator | 2026-03-31 04:21:58.620575 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:21:58.620590 | orchestrator | Tuesday 31 March 2026 04:21:58 +0000 (0:00:00.610) 0:03:15.640 ********* 2026-03-31 04:21:58.620606 | orchestrator | =============================================================================== 2026-03-31 04:21:58.620641 | orchestrator | k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails) -- 33.61s 2026-03-31 04:21:58.620657 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.70s 2026-03-31 04:21:58.620701 | orchestrator | Manage labels ----------------------------------------------------------- 9.71s 2026-03-31 04:21:58.620717 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 7.53s 2026-03-31 04:21:58.620733 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.10s 2026-03-31 04:21:58.620755 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.40s 2026-03-31 04:21:58.620773 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.37s 2026-03-31 04:21:58.620789 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 3.33s 2026-03-31 04:21:58.620805 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.52s 2026-03-31 04:21:58.620815 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.23s 2026-03-31 04:21:58.620825 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.18s 2026-03-31 04:21:58.620834 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.13s 2026-03-31 04:21:58.620853 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.12s 2026-03-31 04:21:59.179918 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.07s 2026-03-31 04:21:59.180045 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.04s 2026-03-31 04:21:59.180070 | orchestrator | k3s_server_post : Check Cilium version ---------------------------------- 1.93s 2026-03-31 04:21:59.180090 | orchestrator | 
k3s_server : Init cluster inside the transient k3s-init service --------- 1.88s 2026-03-31 04:21:59.180109 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.82s 2026-03-31 04:21:59.180128 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.76s 2026-03-31 04:21:59.180147 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.75s 2026-03-31 04:21:59.621131 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-31 04:21:59.621225 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-03-31 04:21:59.628409 | orchestrator | + set -e 2026-03-31 04:21:59.628787 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 04:21:59.628816 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 04:21:59.628826 | orchestrator | ++ INTERACTIVE=false 2026-03-31 04:21:59.628835 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-31 04:21:59.628843 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-31 04:21:59.628851 | orchestrator | + osism apply openstackclient 2026-03-31 04:22:12.218594 | orchestrator | 2026-03-31 04:22:12 | INFO  | Task bf825c5e-f7f1-45ac-bb55-f93484fc3e7b (openstackclient) was prepared for execution. 2026-03-31 04:22:12.218805 | orchestrator | 2026-03-31 04:22:12 | INFO  | It takes a moment until task bf825c5e-f7f1-45ac-bb55-f93484fc3e7b (openstackclient) has been started and output is visible here. 
2026-03-31 04:22:23.524217 | orchestrator | 2026-03-31 04:22:23.524328 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-31 04:22:23.524344 | orchestrator | 2026-03-31 04:22:23.524354 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-31 04:22:23.524364 | orchestrator | Tuesday 31 March 2026 04:22:16 +0000 (0:00:00.264) 0:00:00.264 ********* 2026-03-31 04:22:23.524396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-31 04:22:23.524407 | orchestrator | 2026-03-31 04:22:23.524416 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-31 04:22:23.524425 | orchestrator | Tuesday 31 March 2026 04:22:17 +0000 (0:00:00.258) 0:00:00.523 ********* 2026-03-31 04:22:23.524435 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-31 04:22:23.524445 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-31 04:22:23.524454 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-31 04:22:23.524463 | orchestrator | 2026-03-31 04:22:23.524472 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-31 04:22:23.524481 | orchestrator | Tuesday 31 March 2026 04:22:18 +0000 (0:00:01.405) 0:00:01.929 ********* 2026-03-31 04:22:23.524490 | orchestrator | ok: [testbed-manager] 2026-03-31 04:22:23.524501 | orchestrator | 2026-03-31 04:22:23.524509 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-31 04:22:23.524518 | orchestrator | Tuesday 31 March 2026 04:22:19 +0000 (0:00:01.276) 0:00:03.205 ********* 2026-03-31 04:22:23.524527 | orchestrator | ok: [testbed-manager] 2026-03-31 04:22:23.524536 | orchestrator 
| 2026-03-31 04:22:23.524545 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-31 04:22:23.524554 | orchestrator | Tuesday 31 March 2026 04:22:21 +0000 (0:00:01.384) 0:00:04.590 ********* 2026-03-31 04:22:23.524563 | orchestrator | ok: [testbed-manager] 2026-03-31 04:22:23.524572 | orchestrator | 2026-03-31 04:22:23.524581 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-31 04:22:23.524590 | orchestrator | Tuesday 31 March 2026 04:22:22 +0000 (0:00:01.196) 0:00:05.787 ********* 2026-03-31 04:22:23.524599 | orchestrator | ok: [testbed-manager] 2026-03-31 04:22:23.524608 | orchestrator | 2026-03-31 04:22:23.524617 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:22:23.524626 | orchestrator | testbed-manager : ok=6  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:22:23.524635 | orchestrator | 2026-03-31 04:22:23.524708 | orchestrator | 2026-03-31 04:22:23.524720 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:22:23.524729 | orchestrator | Tuesday 31 March 2026 04:22:23 +0000 (0:00:00.548) 0:00:06.335 ********* 2026-03-31 04:22:23.524737 | orchestrator | =============================================================================== 2026-03-31 04:22:23.524746 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.41s 2026-03-31 04:22:23.524755 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.39s 2026-03-31 04:22:23.524763 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.28s 2026-03-31 04:22:23.524787 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.20s 2026-03-31 04:22:23.524798 | orchestrator | osism.services.openstackclient : Remove 
ospurge wrapper script ---------- 0.55s 2026-03-31 04:22:23.524808 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.26s 2026-03-31 04:22:24.055577 | orchestrator | + osism apply -a upgrade common 2026-03-31 04:22:26.427422 | orchestrator | 2026-03-31 04:22:26 | INFO  | Task 0d0fc53b-44fe-458c-be34-af2f08187006 (common) was prepared for execution. 2026-03-31 04:22:26.427484 | orchestrator | 2026-03-31 04:22:26 | INFO  | It takes a moment until task 0d0fc53b-44fe-458c-be34-af2f08187006 (common) has been started and output is visible here. 2026-03-31 04:22:40.169216 | orchestrator | 2026-03-31 04:22:40.169325 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-31 04:22:40.169342 | orchestrator | 2026-03-31 04:22:40.169354 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-31 04:22:40.169366 | orchestrator | Tuesday 31 March 2026 04:22:31 +0000 (0:00:00.308) 0:00:00.309 ********* 2026-03-31 04:22:40.169403 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 04:22:40.169416 | orchestrator | 2026-03-31 04:22:40.169428 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-31 04:22:40.169439 | orchestrator | Tuesday 31 March 2026 04:22:33 +0000 (0:00:01.804) 0:00:02.113 ********* 2026-03-31 04:22:40.169450 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169462 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169473 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169485 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 
04:22:40.169495 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169506 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169518 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169529 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169540 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-31 04:22:40.169551 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169562 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169573 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169584 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169595 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169606 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169617 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-31 04:22:40.169628 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169707 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169718 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169729 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169740 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-31 04:22:40.169756 | orchestrator | 2026-03-31 04:22:40.169775 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-31 04:22:40.169791 | orchestrator | Tuesday 31 March 2026 04:22:35 +0000 (0:00:02.322) 0:00:04.435 ********* 2026-03-31 04:22:40.169805 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 04:22:40.169820 | orchestrator | 2026-03-31 04:22:40.169832 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-31 04:22:40.169846 | orchestrator | Tuesday 31 March 2026 04:22:37 +0000 (0:00:01.993) 0:00:06.429 ********* 2026-03-31 04:22:40.169863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.169890 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.169939 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.169954 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.170006 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.170076 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.170090 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:40.170138 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:40.170167 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:40.170198 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584155 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584280 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584312 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584326 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584451 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584486 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584499 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584510 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584521 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:41.584534 | orchestrator | 2026-03-31 04:22:41.584546 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-31 04:22:41.584558 | orchestrator | Tuesday 31 March 2026 04:22:40 +0000 (0:00:03.549) 0:00:09.978 ********* 2026-03-31 04:22:41.584573 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:41.584591 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:41.584620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:41.584666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:41.584699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410330 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:42.410517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410577 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:22:42.410594 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:22:42.410609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:42.410663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410698 | 
orchestrator | skipping: [testbed-node-1] 2026-03-31 04:22:42.410744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:42.410757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410785 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:22:42.410799 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:22:42.410813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:42.410840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:42.410860 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:22:42.410874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:42.410892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187114 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:22:43.187120 | orchestrator | 2026-03-31 04:22:43.187125 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-31 04:22:43.187130 | orchestrator | Tuesday 31 March 2026 04:22:42 +0000 (0:00:01.430) 0:00:11.409 ********* 2026-03-31 04:22:43.187135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:43.187160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187165 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187170 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:22:43.187174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:43.187179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187187 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:22:43.187201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:43.187205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187218 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:22:43.187222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:43.187237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:43.187248 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:22:43.187252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:43.187260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.451813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.451956 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:22:49.451977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:49.451992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.452004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.452016 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:22:49.452028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-31 04:22:49.452055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.452068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:22:49.452079 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:22:49.452091 | orchestrator | 2026-03-31 04:22:49.452103 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-31 04:22:49.452116 | orchestrator | Tuesday 31 March 2026 04:22:44 +0000 (0:00:02.215) 0:00:13.625 ********* 2026-03-31 04:22:49.452126 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:22:49.452138 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:22:49.452148 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:22:49.452159 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:22:49.452199 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:22:49.452211 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:22:49.452223 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:22:49.452234 | orchestrator | 2026-03-31 04:22:49.452245 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-31 04:22:49.452260 | orchestrator | Tuesday 31 March 2026 04:22:45 +0000 (0:00:01.104) 0:00:14.729 ********* 2026-03-31 04:22:49.452279 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:22:49.452298 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:22:49.452316 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:22:49.452336 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:22:49.452356 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:22:49.452376 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:22:49.452405 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:22:49.452427 | orchestrator | 2026-03-31 04:22:49.452446 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-31 
04:22:49.452465 | orchestrator | Tuesday 31 March 2026 04:22:46 +0000 (0:00:01.019) 0:00:15.748 ********* 2026-03-31 04:22:49.452487 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452507 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452527 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452556 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452577 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:49.452674 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810770 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:22:52.810859 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810869 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810878 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810884 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810919 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810942 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810979 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810984 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.810994 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.811005 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:22:52.811010 | orchestrator | 2026-03-31 04:22:52.811017 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-31 04:22:52.811025 | orchestrator | Tuesday 31 March 2026 04:22:50 +0000 (0:00:03.577) 0:00:19.326 ********* 2026-03-31 04:22:52.811031 | orchestrator | [WARNING]: Skipped 2026-03-31 04:22:52.811039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-31 04:22:52.811046 | orchestrator | to this access issue: 2026-03-31 04:22:52.811052 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-31 04:22:52.811058 | orchestrator | directory 2026-03-31 04:22:52.811064 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 04:22:52.811071 | orchestrator | 2026-03-31 04:22:52.811077 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-31 04:22:52.811083 | orchestrator | Tuesday 31 March 2026 04:22:51 +0000 (0:00:01.151) 0:00:20.477 ********* 2026-03-31 04:22:52.811089 | orchestrator | [WARNING]: Skipped 2026-03-31 04:22:52.811094 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-31 04:22:52.811100 | orchestrator | to this access issue: 2026-03-31 04:22:52.811106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-31 04:22:52.811113 | orchestrator | directory 2026-03-31 04:22:52.811124 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 04:23:04.281155 | orchestrator | 2026-03-31 04:23:04.281240 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-31 04:23:04.281249 | orchestrator | Tuesday 31 March 2026 04:22:52 +0000 (0:00:01.340) 0:00:21.818 ********* 2026-03-31 04:23:04.281256 | orchestrator | [WARNING]: Skipped 2026-03-31 
04:23:04.281262 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-31 04:23:04.281269 | orchestrator | to this access issue: 2026-03-31 04:23:04.281275 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-31 04:23:04.281280 | orchestrator | directory 2026-03-31 04:23:04.281285 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 04:23:04.281291 | orchestrator | 2026-03-31 04:23:04.281297 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-31 04:23:04.281302 | orchestrator | Tuesday 31 March 2026 04:22:53 +0000 (0:00:00.951) 0:00:22.769 ********* 2026-03-31 04:23:04.281307 | orchestrator | [WARNING]: Skipped 2026-03-31 04:23:04.281312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-31 04:23:04.281318 | orchestrator | to this access issue: 2026-03-31 04:23:04.281323 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-31 04:23:04.281328 | orchestrator | directory 2026-03-31 04:23:04.281333 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-31 04:23:04.281338 | orchestrator | 2026-03-31 04:23:04.281344 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-31 04:23:04.281349 | orchestrator | Tuesday 31 March 2026 04:22:54 +0000 (0:00:00.984) 0:00:23.754 ********* 2026-03-31 04:23:04.281354 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:04.281359 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:04.281364 | orchestrator | ok: [testbed-manager] 2026-03-31 04:23:04.281369 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:04.281375 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:23:04.281380 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:23:04.281400 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:23:04.281406 | 
orchestrator | 2026-03-31 04:23:04.281412 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-31 04:23:04.281417 | orchestrator | Tuesday 31 March 2026 04:22:57 +0000 (0:00:02.965) 0:00:26.720 ********* 2026-03-31 04:23:04.281422 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281428 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281433 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281438 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281443 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281448 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281454 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-31 04:23:04.281459 | orchestrator | 2026-03-31 04:23:04.281464 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-31 04:23:04.281480 | orchestrator | Tuesday 31 March 2026 04:23:00 +0000 (0:00:02.531) 0:00:29.251 ********* 2026-03-31 04:23:04.281486 | orchestrator | ok: [testbed-manager] 2026-03-31 04:23:04.281491 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:04.281496 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:04.281501 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:04.281506 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:23:04.281511 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:23:04.281516 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:23:04.281521 | 
orchestrator | 2026-03-31 04:23:04.281526 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-31 04:23:04.281532 | orchestrator | Tuesday 31 March 2026 04:23:02 +0000 (0:00:02.313) 0:00:31.565 ********* 2026-03-31 04:23:04.281540 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:04.281548 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:04.281566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:04.281572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:04.281582 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:04.281588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:04.281597 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:04.281642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:04.281650 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:04.281663 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578577 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:11.578794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:11.578824 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578845 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:11.578857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:11.578868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578878 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:11.578909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:23:11.578931 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578943 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578953 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:11.578964 | orchestrator | 2026-03-31 04:23:11.578976 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-31 04:23:11.579015 | orchestrator | Tuesday 31 March 2026 04:23:04 +0000 (0:00:01.717) 0:00:33.283 ********* 2026-03-31 04:23:11.579026 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579115 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579137 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579155 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579172 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579189 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579215 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-31 04:23:11.579231 | orchestrator | 2026-03-31 04:23:11.579245 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-31 04:23:11.579263 | orchestrator | Tuesday 31 March 2026 04:23:06 +0000 (0:00:02.311) 0:00:35.594 ********* 2026-03-31 04:23:11.579281 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579298 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579316 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579333 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579350 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579363 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579374 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-31 04:23:11.579385 | orchestrator | 2026-03-31 04:23:11.579396 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-31 04:23:11.579407 | orchestrator | Tuesday 31 March 2026 04:23:08 +0000 (0:00:02.236) 0:00:37.831 ********* 2026-03-31 04:23:11.579419 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:11.579453 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093287 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093382 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093396 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093420 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-31 04:23:14.093440 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093470 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093495 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093505 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093515 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093554 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093566 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.093583 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.713306 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.713407 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.713423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.713435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:23:14.713454 | orchestrator | 2026-03-31 04:23:14.713476 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713495 | orchestrator | Tuesday 31 March 2026 04:23:12 +0000 (0:00:03.675) 0:00:41.506 ********* 2026-03-31 04:23:14.713514 | orchestrator | 2026-03-31 04:23:14.713554 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713573 | orchestrator | Tuesday 31 March 2026 04:23:12 +0000 (0:00:00.290) 0:00:41.797 ********* 2026-03-31 04:23:14.713591 | orchestrator | 2026-03-31 04:23:14.713680 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713735 | orchestrator | Tuesday 31 March 2026 04:23:12 +0000 (0:00:00.075) 0:00:41.872 ********* 2026-03-31 04:23:14.713752 | orchestrator | 2026-03-31 04:23:14.713769 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713787 | orchestrator | Tuesday 31 March 2026 04:23:12 +0000 (0:00:00.087) 0:00:41.960 ********* 2026-03-31 04:23:14.713803 | orchestrator | 2026-03-31 04:23:14.713821 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713838 | orchestrator | 
Tuesday 31 March 2026 04:23:13 +0000 (0:00:00.094) 0:00:42.055 ********* 2026-03-31 04:23:14.713856 | orchestrator | 2026-03-31 04:23:14.713875 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713894 | orchestrator | Tuesday 31 March 2026 04:23:13 +0000 (0:00:00.096) 0:00:42.151 ********* 2026-03-31 04:23:14.713912 | orchestrator | 2026-03-31 04:23:14.713933 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-31 04:23:14.713969 | orchestrator | Tuesday 31 March 2026 04:23:13 +0000 (0:00:00.076) 0:00:42.228 ********* 2026-03-31 04:23:14.713989 | orchestrator | 2026-03-31 04:23:14.714007 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:23:14.714113 | orchestrator | testbed-manager : ok=16  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714134 | orchestrator | testbed-node-0 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714153 | orchestrator | testbed-node-1 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714170 | orchestrator | testbed-node-2 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714189 | orchestrator | testbed-node-3 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714209 | orchestrator | testbed-node-4 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714229 | orchestrator | testbed-node-5 : ok=12  changed=0 unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-31 04:23:14.714249 | orchestrator | 2026-03-31 04:23:14.714269 | orchestrator | 2026-03-31 04:23:14.714289 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:23:14.714336 | orchestrator | 
Tuesday 31 March 2026 04:23:14 +0000 (0:00:00.875) 0:00:43.103 ********* 2026-03-31 04:23:14.714357 | orchestrator | =============================================================================== 2026-03-31 04:23:14.714377 | orchestrator | common : Check common containers ---------------------------------------- 3.68s 2026-03-31 04:23:14.714396 | orchestrator | common : Copying over config.json files for services -------------------- 3.58s 2026-03-31 04:23:14.714415 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.55s 2026-03-31 04:23:14.714435 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.97s 2026-03-31 04:23:14.714455 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.53s 2026-03-31 04:23:14.714474 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.32s 2026-03-31 04:23:14.714494 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.31s 2026-03-31 04:23:14.714514 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.31s 2026-03-31 04:23:14.714533 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.24s 2026-03-31 04:23:14.714552 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.22s 2026-03-31 04:23:14.714571 | orchestrator | common : include_tasks -------------------------------------------------- 1.99s 2026-03-31 04:23:14.714638 | orchestrator | common : include_tasks -------------------------------------------------- 1.80s 2026-03-31 04:23:14.714659 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.72s 2026-03-31 04:23:14.714679 | orchestrator | common : Flush handlers ------------------------------------------------- 1.60s 2026-03-31 04:23:14.714698 | orchestrator | service-cert-copy : 
common | Copying over backend internal TLS certificate --- 1.43s 2026-03-31 04:23:14.714719 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.34s 2026-03-31 04:23:14.714731 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.15s 2026-03-31 04:23:14.714742 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.10s 2026-03-31 04:23:14.714753 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.02s 2026-03-31 04:23:14.714764 | orchestrator | common : Find custom fluentd output config files ------------------------ 0.98s 2026-03-31 04:23:15.159272 | orchestrator | + osism apply -a upgrade loadbalancer 2026-03-31 04:23:17.386169 | orchestrator | 2026-03-31 04:23:17 | INFO  | Task df76029f-87f8-44e3-bcfb-d644ffe464d9 (loadbalancer) was prepared for execution. 2026-03-31 04:23:17.386261 | orchestrator | 2026-03-31 04:23:17 | INFO  | It takes a moment until task df76029f-87f8-44e3-bcfb-d644ffe464d9 (loadbalancer) has been started and output is visible here. 
2026-03-31 04:23:36.329152 | orchestrator | 2026-03-31 04:23:36.329264 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:23:36.329283 | orchestrator | 2026-03-31 04:23:36.329294 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:23:36.329306 | orchestrator | Tuesday 31 March 2026 04:23:22 +0000 (0:00:00.330) 0:00:00.330 ********* 2026-03-31 04:23:36.329316 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:36.329329 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:36.329338 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:36.329346 | orchestrator | 2026-03-31 04:23:36.329353 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:23:36.329360 | orchestrator | Tuesday 31 March 2026 04:23:22 +0000 (0:00:00.407) 0:00:00.737 ********* 2026-03-31 04:23:36.329367 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-31 04:23:36.329375 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-31 04:23:36.329381 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-31 04:23:36.329388 | orchestrator | 2026-03-31 04:23:36.329395 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-31 04:23:36.329402 | orchestrator | 2026-03-31 04:23:36.329409 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-31 04:23:36.329415 | orchestrator | Tuesday 31 March 2026 04:23:23 +0000 (0:00:00.556) 0:00:01.293 ********* 2026-03-31 04:23:36.329423 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:23:36.329430 | orchestrator | 2026-03-31 04:23:36.329437 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-03-31 04:23:36.329443 | orchestrator | Tuesday 31 March 2026 04:23:23 +0000 (0:00:00.708) 0:00:02.002 ********* 2026-03-31 04:23:36.329451 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:36.329458 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:36.329465 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:36.329471 | orchestrator | 2026-03-31 04:23:36.329478 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-03-31 04:23:36.329485 | orchestrator | Tuesday 31 March 2026 04:23:24 +0000 (0:00:01.179) 0:00:03.182 ********* 2026-03-31 04:23:36.329492 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:36.329498 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:36.329505 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:36.329512 | orchestrator | 2026-03-31 04:23:36.329518 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-31 04:23:36.329545 | orchestrator | Tuesday 31 March 2026 04:23:25 +0000 (0:00:00.760) 0:00:03.943 ********* 2026-03-31 04:23:36.329552 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:36.329559 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:36.329565 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:36.329572 | orchestrator | 2026-03-31 04:23:36.329579 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-31 04:23:36.329585 | orchestrator | Tuesday 31 March 2026 04:23:26 +0000 (0:00:00.697) 0:00:04.640 ********* 2026-03-31 04:23:36.329675 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:23:36.329684 | orchestrator | 2026-03-31 04:23:36.329691 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-31 04:23:36.329698 | orchestrator | Tuesday 31 March 2026 04:23:27 +0000 (0:00:00.946) 0:00:05.587 ********* 2026-03-31 
04:23:36.329705 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:36.329713 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:36.329720 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:36.329728 | orchestrator | 2026-03-31 04:23:36.329736 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-31 04:23:36.329744 | orchestrator | Tuesday 31 March 2026 04:23:28 +0000 (0:00:00.717) 0:00:06.304 ********* 2026-03-31 04:23:36.329751 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329759 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329766 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329774 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329781 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329789 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-31 04:23:36.329813 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-31 04:23:36.329822 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-31 04:23:36.329830 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-31 04:23:36.329837 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-31 04:23:36.329845 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-31 04:23:36.329852 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-03-31 04:23:36.329859 | orchestrator | 2026-03-31 04:23:36.329867 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-31 04:23:36.329874 | orchestrator | Tuesday 31 March 2026 04:23:31 +0000 (0:00:03.100) 0:00:09.405 ********* 2026-03-31 04:23:36.329892 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-31 04:23:36.329906 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-31 04:23:36.329914 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-31 04:23:36.329922 | orchestrator | 2026-03-31 04:23:36.329929 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-31 04:23:36.329953 | orchestrator | Tuesday 31 March 2026 04:23:32 +0000 (0:00:01.091) 0:00:10.496 ********* 2026-03-31 04:23:36.329962 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-31 04:23:36.329970 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-31 04:23:36.329978 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-31 04:23:36.329986 | orchestrator | 2026-03-31 04:23:36.329993 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-31 04:23:36.330001 | orchestrator | Tuesday 31 March 2026 04:23:33 +0000 (0:00:01.218) 0:00:11.715 ********* 2026-03-31 04:23:36.330009 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-31 04:23:36.330139 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:23:36.330150 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-31 04:23:36.330157 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:23:36.330165 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-31 04:23:36.330173 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:23:36.330179 | orchestrator | 2026-03-31 04:23:36.330196 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-03-31 04:23:36.330203 | orchestrator | Tuesday 31 March 2026 04:23:34 +0000 (0:00:00.936) 0:00:12.651 ********* 2026-03-31 04:23:36.330213 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-31 04:23:36.330223 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-31 04:23:36.330231 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-31 04:23:36.330238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:23:36.330251 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:23:36.330266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:23:42.904185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:23:42.904276 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:23:42.904290 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:23:42.904300 | orchestrator | 2026-03-31 04:23:42.904314 | orchestrator | TASK [loadbalancer : 
Ensuring haproxy service config subdir exists] ************ 2026-03-31 04:23:42.904329 | orchestrator | Tuesday 31 March 2026 04:23:36 +0000 (0:00:01.899) 0:00:14.550 ********* 2026-03-31 04:23:42.904346 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:42.904358 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:42.904369 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:42.904380 | orchestrator | 2026-03-31 04:23:42.904391 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-31 04:23:42.904402 | orchestrator | Tuesday 31 March 2026 04:23:37 +0000 (0:00:01.070) 0:00:15.621 ********* 2026-03-31 04:23:42.904412 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-31 04:23:42.904424 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-31 04:23:42.904436 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-31 04:23:42.904447 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-31 04:23:42.904458 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-31 04:23:42.904470 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-31 04:23:42.904482 | orchestrator | 2026-03-31 04:23:42.904494 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-31 04:23:42.904506 | orchestrator | Tuesday 31 March 2026 04:23:39 +0000 (0:00:02.001) 0:00:17.623 ********* 2026-03-31 04:23:42.904516 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:23:42.904523 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:42.904530 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:42.904536 | orchestrator | 2026-03-31 04:23:42.904543 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-31 04:23:42.904550 | orchestrator | Tuesday 31 March 2026 04:23:40 +0000 (0:00:01.378) 0:00:19.001 ********* 2026-03-31 04:23:42.904557 | orchestrator | ok: [testbed-node-0] 2026-03-31 
04:23:42.904583 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:23:42.904617 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:23:42.904624 | orchestrator | 2026-03-31 04:23:42.904631 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-31 04:23:42.904638 | orchestrator | Tuesday 31 March 2026 04:23:42 +0000 (0:00:01.317) 0:00:20.318 ********* 2026-03-31 04:23:42.904658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:23:42.904686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:23:42.904695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:23:42.904703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-31 04:23:42.904711 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:23:42.904723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:23:42.904734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:23:42.904754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:23:42.904772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-31 04:23:42.904790 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:23:46.296100 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 04:23:46.296205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:46.296224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:46.296237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 04:23:46.296274 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:23:46.296289 | orchestrator |
2026-03-31 04:23:46.296301 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-31 04:23:46.296314 | orchestrator | Tuesday 31 March 2026 04:23:42 +0000 (0:00:00.811) 0:00:21.129 *********
2026-03-31 04:23:46.296326 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 04:23:46.296339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 04:23:46.296368 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 04:23:46.296381 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:46.296393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:46.296405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 04:23:46.296472 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:46.296491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:46.296512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 04:23:55.312183 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:55.312287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:55.312300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120', '__omit_place_holder__3deb233d3cf18704056138b6e2a24a003720b120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-31 04:23:55.312331 | orchestrator |
2026-03-31 04:23:55.312342 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-31 04:23:55.312352 | orchestrator | Tuesday 31 March 2026 04:23:46 +0000 (0:00:03.392) 0:00:24.522 *********
2026-03-31 04:23:55.312361 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 04:23:55.312372 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 04:23:55.312381 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 04:23:55.312403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:55.312412 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:55.312420 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:23:55.312440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:55.312450 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:55.312472 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:23:55.312481 | orchestrator |
2026-03-31 04:23:55.312489 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-31 04:23:55.312497 | orchestrator | Tuesday 31 March 2026 04:23:49 +0000 (0:00:03.564) 0:00:28.086 *********
2026-03-31 04:23:55.312506 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 04:23:55.312515 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 04:23:55.312523 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-31 04:23:55.312531 | orchestrator |
2026-03-31 04:23:55.312539 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-31 04:23:55.312547 | orchestrator | Tuesday 31 March 2026 04:23:51 +0000 (0:00:01.928) 0:00:30.015 *********
2026-03-31 04:23:55.312560 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 04:24:08.090295 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 04:24:08.090389 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-31 04:24:08.090399 | orchestrator |
2026-03-31 04:24:08.090407 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-31 04:24:08.090414 | orchestrator | Tuesday 31 March 2026 04:23:55 +0000 (0:00:03.525) 0:00:33.540 *********
2026-03-31 04:24:08.090421 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:24:08.090429 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:24:08.090435 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:24:08.090441 | orchestrator |
2026-03-31 04:24:08.090448 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-31 04:24:08.090472 | orchestrator | Tuesday 31 March 2026 04:23:56 +0000 (0:00:01.247) 0:00:34.788 *********
2026-03-31 04:24:08.090479 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 04:24:08.090486 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 04:24:08.090493 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-31 04:24:08.090500 | orchestrator |
2026-03-31 04:24:08.090506 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-31 04:24:08.090512 | orchestrator | Tuesday 31 March 2026 04:23:58 +0000 (0:00:02.129) 0:00:36.917 *********
2026-03-31 04:24:08.090519 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 04:24:08.090525 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 04:24:08.090531 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-31 04:24:08.090538 | orchestrator |
2026-03-31 04:24:08.090544 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-31 04:24:08.090550 | orchestrator | Tuesday 31 March 2026 04:24:00 +0000 (0:00:01.797) 0:00:38.715 *********
2026-03-31 04:24:08.090557 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-03-31 04:24:08.090564 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-03-31 04:24:08.090636 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-03-31 04:24:08.090644 | orchestrator |
2026-03-31 04:24:08.090650 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-31 04:24:08.090657 | orchestrator | Tuesday 31 March 2026 04:24:02 +0000 (0:00:01.576) 0:00:40.292 *********
2026-03-31 04:24:08.090663 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-31 04:24:08.090670 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-31 04:24:08.090688 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-31 04:24:08.090701 | orchestrator |
2026-03-31 04:24:08.090715 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-31 04:24:08.090726 | orchestrator | Tuesday 31 March 2026 04:24:04 +0000 (0:00:02.050) 0:00:42.343 *********
2026-03-31 04:24:08.090736 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:24:08.090745 | orchestrator |
2026-03-31 04:24:08.090755 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-31 04:24:08.090765 | orchestrator | Tuesday 31 March 2026 04:24:04 +0000 (0:00:00.668) 0:00:43.011 *********
2026-03-31 04:24:08.090794 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 04:24:08.090808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 04:24:08.090841 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 04:24:08.090853 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:08.090864 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:08.090875 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:08.090886 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:08.090904 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:08.090916 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:08.090934 | orchestrator |
2026-03-31 04:24:08.090945 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-31 04:24:08.090959 | orchestrator | Tuesday 31 March 2026 04:24:08 +0000 (0:00:03.292) 0:00:46.304 *********
2026-03-31 04:24:10.060487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 04:24:10.060560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:10.060623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:10.060631 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:24:10.060637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 04:24:10.060643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:10.060647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:10.060669 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:24:10.060696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-31 04:24:10.060701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:10.060706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:10.060710 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:24:10.060714 | orchestrator |
2026-03-31 04:24:10.060720 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-31 04:24:10.060726 | orchestrator | Tuesday 31 March 2026 04:24:09 +0000 (0:00:01.037) 0:00:47.341 *********
2026-03-31 04:24:10.060731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-31 04:24:10.060742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-31 04:24:10.060753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-31 04:24:10.060758 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:24:10.060766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-31 04:24:11.235000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:11.235069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:11.235076 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:11.235082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:11.235087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:11.235103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:11.235121 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:11.235126 | orchestrator | 2026-03-31 04:24:11.235131 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-31 04:24:11.235137 | orchestrator | Tuesday 31 March 2026 04:24:10 +0000 (0:00:00.951) 0:00:48.293 ********* 2026-03-31 04:24:11.235141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:11.235157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:11.235162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:11.235166 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:11.235171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:11.235176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:11.235184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:11.235189 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:11.235197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:11.235204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:13.234516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:13.234654 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:13.234665 | orchestrator | 2026-03-31 04:24:13.234673 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-31 04:24:13.234680 | orchestrator | Tuesday 31 March 2026 04:24:11 +0000 (0:00:01.170) 0:00:49.463 ********* 2026-03-31 04:24:13.234688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:13.234696 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:13.234719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:13.234725 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:13.234743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:13.234749 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:13.234767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:13.234773 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:13.234780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:13.234786 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:13.234796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:13.234802 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:13.234808 | orchestrator | 2026-03-31 04:24:13.234813 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-31 04:24:13.234819 | orchestrator | Tuesday 31 March 2026 04:24:12 +0000 (0:00:01.002) 0:00:50.466 ********* 2026-03-31 04:24:13.234828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:13.234834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:13.234845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:14.626315 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:14.626429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:14.626444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:14.626473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:14.626482 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:14.626491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:14.626513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:14.626522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:14.626531 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:14.626539 | orchestrator | 2026-03-31 04:24:14.626549 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-31 04:24:14.626558 | orchestrator | Tuesday 31 March 2026 04:24:13 +0000 (0:00:01.001) 0:00:51.467 ********* 2026-03-31 04:24:14.626622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:14.626632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:14.626647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:14.626656 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:14.626664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:14.626673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:14.626682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:14.626690 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:14.626704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:16.431491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:16.431707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:16.431739 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:16.431755 | orchestrator | 2026-03-31 04:24:16.431771 | orchestrator | TASK 
[service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-31 04:24:16.431805 | orchestrator | Tuesday 31 March 2026 04:24:14 +0000 (0:00:01.383) 0:00:52.851 ********* 2026-03-31 04:24:16.431848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:16.431867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:16.431877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:16.431884 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:16.431892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:16.431917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:16.431933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:16.431940 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:16.431948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:16.431956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:16.431967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:16.431974 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:16.431981 | orchestrator | 2026-03-31 04:24:16.431988 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-31 04:24:16.431995 | orchestrator | Tuesday 31 March 2026 04:24:15 +0000 (0:00:00.749) 0:00:53.601 ********* 2026-03-31 04:24:16.432002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-31 04:24:16.432017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:25.915855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:25.915977 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:25.915999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-31 04:24:25.916015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:25.916028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:25.916056 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:25.916069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-31 04:24:25.916081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-31 04:24:25.916141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-31 04:24:25.916155 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:25.916166 | orchestrator | 2026-03-31 04:24:25.916179 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-31 04:24:25.916192 | orchestrator | Tuesday 31 March 2026 04:24:16 +0000 (0:00:01.062) 0:00:54.663 ********* 2026-03-31 04:24:25.916204 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 04:24:25.916221 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 04:24:25.916241 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-31 04:24:25.916272 | orchestrator | 2026-03-31 04:24:25.916291 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-31 04:24:25.916309 | orchestrator | Tuesday 31 March 2026 04:24:18 +0000 (0:00:02.058) 0:00:56.722 ********* 2026-03-31 04:24:25.916329 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 04:24:25.916347 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 04:24:25.916366 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-31 04:24:25.916384 | orchestrator | 2026-03-31 04:24:25.916402 | orchestrator | 
TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-31 04:24:25.916418 | orchestrator | Tuesday 31 March 2026 04:24:20 +0000 (0:00:01.626) 0:00:58.348 ********* 2026-03-31 04:24:25.916435 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 04:24:25.916453 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 04:24:25.916470 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-31 04:24:25.916488 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 04:24:25.916507 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:25.916526 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 04:24:25.916546 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:25.916632 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-31 04:24:25.916651 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:25.916670 | orchestrator | 2026-03-31 04:24:25.916689 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-31 04:24:25.916708 | orchestrator | Tuesday 31 March 2026 04:24:21 +0000 (0:00:01.058) 0:00:59.407 ********* 2026-03-31 04:24:25.916740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-31 04:24:25.916769 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-31 04:24:25.916795 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-31 04:24:31.248088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:24:31.248201 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:24:31.248218 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-31 04:24:31.248248 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:24:31.248313 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:24:31.248327 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-31 04:24:31.248340 | orchestrator | 2026-03-31 04:24:31.248354 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-31 04:24:31.248367 | orchestrator | Tuesday 31 March 2026 04:24:25 +0000 (0:00:04.738) 0:01:04.145 ********* 2026-03-31 04:24:31.248379 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:24:31.248390 | orchestrator | 2026-03-31 04:24:31.248401 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-31 04:24:31.248413 | orchestrator | Tuesday 31 March 2026 04:24:26 +0000 (0:00:00.752) 0:01:04.897 ********* 
2026-03-31 04:24:31.248445 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 04:24:31.248460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 04:24:31.248472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:31.248491 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 04:24:31.248512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:31.248523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 04:24:31.248542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065374 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-31 04:24:32.065410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 04:24:32.065420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065434 | orchestrator | 2026-03-31 04:24:32.065443 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-31 04:24:32.065451 | orchestrator | Tuesday 31 March 2026 04:24:31 +0000 (0:00:04.569) 0:01:09.467 ********* 2026-03-31 04:24:32.065475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 04:24:32.065483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 04:24:32.065491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065512 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:32.065520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 04:24:32.065542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-31 04:24:32.065599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:32.065617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.113975 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:42.114091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-31 04:24:42.114124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-31 04:24:42.114130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.114135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.114139 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:42.114143 | orchestrator | 2026-03-31 04:24:42.114148 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-31 04:24:42.114152 | orchestrator | Tuesday 31 March 2026 04:24:32 +0000 (0:00:00.831) 0:01:10.298 ********* 2026-03-31 04:24:42.114157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114169 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:42.114173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114181 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:42.114185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-31 04:24:42.114207 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:42.114211 | orchestrator | 2026-03-31 04:24:42.114215 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-31 04:24:42.114219 | orchestrator | Tuesday 31 March 2026 04:24:33 +0000 (0:00:01.030) 0:01:11.329 ********* 2026-03-31 04:24:42.114223 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:24:42.114228 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:24:42.114231 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:24:42.114235 | orchestrator | 2026-03-31 04:24:42.114239 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-31 04:24:42.114243 | orchestrator | Tuesday 31 March 2026 04:24:34 +0000 (0:00:01.780) 0:01:13.110 
********* 2026-03-31 04:24:42.114247 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:24:42.114250 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:24:42.114254 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:24:42.114258 | orchestrator | 2026-03-31 04:24:42.114262 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-31 04:24:42.114266 | orchestrator | Tuesday 31 March 2026 04:24:37 +0000 (0:00:02.344) 0:01:15.454 ********* 2026-03-31 04:24:42.114270 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:24:42.114274 | orchestrator | 2026-03-31 04:24:42.114278 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-31 04:24:42.114281 | orchestrator | Tuesday 31 March 2026 04:24:37 +0000 (0:00:00.744) 0:01:16.198 ********* 2026-03-31 04:24:42.114289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 04:24:42.114295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.114300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.114307 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 04:24:42.859530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859720 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-31 04:24:42.859740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859803 | orchestrator | 2026-03-31 04:24:42.859823 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single 
external frontend] *** 2026-03-31 04:24:42.859841 | orchestrator | Tuesday 31 March 2026 04:24:42 +0000 (0:00:04.139) 0:01:20.338 ********* 2026-03-31 04:24:42.859882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 04:24:42.859900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.859943 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:42.859963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 04:24:42.859980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.860008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:42.860025 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:42.860056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-31 04:24:54.458092 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-31 04:24:54.458226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:24:54.458247 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:54.458261 | orchestrator | 2026-03-31 04:24:54.458274 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-31 04:24:54.458287 | orchestrator | Tuesday 31 March 2026 04:24:42 +0000 (0:00:00.750) 0:01:21.089 ********* 2026-03-31 04:24:54.458299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458345 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:54.458357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458380 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:54.458391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-31 04:24:54.458414 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:54.458425 | orchestrator | 2026-03-31 04:24:54.458436 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-31 04:24:54.458447 | orchestrator | Tuesday 31 March 2026 04:24:43 +0000 (0:00:01.023) 0:01:22.112 ********* 2026-03-31 04:24:54.458458 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:24:54.458470 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:24:54.458481 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:24:54.458491 | orchestrator | 2026-03-31 04:24:54.458502 | orchestrator | TASK [proxysql-config : Copying over 
barbican ProxySQL rules config] *********** 2026-03-31 04:24:54.458515 | orchestrator | Tuesday 31 March 2026 04:24:45 +0000 (0:00:01.781) 0:01:23.894 ********* 2026-03-31 04:24:54.458528 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:24:54.458587 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:24:54.458600 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:24:54.458613 | orchestrator | 2026-03-31 04:24:54.458625 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-31 04:24:54.458638 | orchestrator | Tuesday 31 March 2026 04:24:48 +0000 (0:00:02.425) 0:01:26.319 ********* 2026-03-31 04:24:54.458650 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:54.458663 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:24:54.458675 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:24:54.458688 | orchestrator | 2026-03-31 04:24:54.458707 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-31 04:24:54.458728 | orchestrator | Tuesday 31 March 2026 04:24:48 +0000 (0:00:00.351) 0:01:26.671 ********* 2026-03-31 04:24:54.458748 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:24:54.458766 | orchestrator | 2026-03-31 04:24:54.458779 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-31 04:24:54.458816 | orchestrator | Tuesday 31 March 2026 04:24:49 +0000 (0:00:01.102) 0:01:27.773 ********* 2026-03-31 04:24:54.458857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 04:24:54.458900 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 04:24:54.458919 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-31 04:24:54.458937 | orchestrator | 2026-03-31 04:24:54.458955 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-31 04:24:54.458974 | orchestrator | Tuesday 31 March 2026 04:24:52 +0000 (0:00:03.049) 0:01:30.823 ********* 2026-03-31 04:24:54.458992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 04:24:54.459011 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:24:54.459045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 04:25:04.315159 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:04.315291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-31 04:25:04.315335 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:04.315349 | orchestrator | 2026-03-31 04:25:04.315362 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-31 04:25:04.315375 | orchestrator | Tuesday 31 March 2026 04:24:54 +0000 (0:00:01.859) 0:01:32.682 ********* 2026-03-31 04:25:04.315387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 
04:25:04.315402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 04:25:04.315416 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:04.315427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 04:25:04.315469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 04:25:04.315482 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:04.315494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 04:25:04.315506 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-31 04:25:04.315517 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:04.315619 | orchestrator | 2026-03-31 04:25:04.315637 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-31 04:25:04.315660 | orchestrator | Tuesday 31 March 2026 04:24:57 +0000 (0:00:02.802) 0:01:35.485 ********* 2026-03-31 04:25:04.315673 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:04.315686 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:04.315699 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:04.315712 | orchestrator | 2026-03-31 04:25:04.315726 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-31 04:25:04.315758 | orchestrator | Tuesday 31 March 2026 04:24:57 +0000 (0:00:00.523) 0:01:36.008 ********* 2026-03-31 04:25:04.315772 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:04.315785 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:04.315798 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:04.315812 | orchestrator | 2026-03-31 04:25:04.315825 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-31 04:25:04.315838 | orchestrator | Tuesday 31 March 2026 04:24:59 +0000 (0:00:01.528) 0:01:37.537 ********* 2026-03-31 04:25:04.315851 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:25:04.315873 | orchestrator | 2026-03-31 04:25:04.315892 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-31 04:25:04.315911 | orchestrator | Tuesday 31 March 2026 04:25:00 +0000 (0:00:01.100) 0:01:38.637 ********* 2026-03-31 04:25:04.315932 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 04:25:04.315956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:04.315977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 04:25:04.316000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:04.316064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 04:25:05.123215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-31 04:25:05.123335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2026-03-31 04:25:05.123453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123479 | orchestrator | 2026-03-31 04:25:05.123493 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-31 04:25:05.123505 | orchestrator | Tuesday 31 March 2026 04:25:04 +0000 (0:00:04.030) 0:01:42.667 ********* 2026-03-31 04:25:05.123518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 04:25:05.123695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:05.123799 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:05.123840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 04:25:16.007760 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.007871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.007886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.007920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-31 04:25:16.007932 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:16.007960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.007988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.007999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.008009 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:16.008019 | orchestrator | 2026-03-31 04:25:16.008030 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-31 04:25:16.008042 | orchestrator | Tuesday 31 March 2026 04:25:05 +0000 (0:00:00.810) 0:01:43.478 ********* 2026-03-31 04:25:16.008061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008095 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:16.008104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008114 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:16.008124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-31 04:25:16.008144 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:16.008154 | orchestrator | 2026-03-31 04:25:16.008164 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-31 04:25:16.008174 | orchestrator | Tuesday 31 March 2026 04:25:06 +0000 (0:00:01.703) 0:01:45.182 ********* 2026-03-31 04:25:16.008183 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:25:16.008194 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:25:16.008204 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:25:16.008213 | orchestrator | 2026-03-31 04:25:16.008223 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-31 
04:25:16.008238 | orchestrator | Tuesday 31 March 2026 04:25:08 +0000 (0:00:01.367) 0:01:46.549 ********* 2026-03-31 04:25:16.008248 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:25:16.008257 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:25:16.008267 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:25:16.008277 | orchestrator | 2026-03-31 04:25:16.008287 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-31 04:25:16.008297 | orchestrator | Tuesday 31 March 2026 04:25:10 +0000 (0:00:02.029) 0:01:48.578 ********* 2026-03-31 04:25:16.008307 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:16.008318 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:16.008328 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:16.008337 | orchestrator | 2026-03-31 04:25:16.008347 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-31 04:25:16.008357 | orchestrator | Tuesday 31 March 2026 04:25:10 +0000 (0:00:00.348) 0:01:48.927 ********* 2026-03-31 04:25:16.008367 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:16.008377 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:16.008387 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:16.008397 | orchestrator | 2026-03-31 04:25:16.008407 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-31 04:25:16.008417 | orchestrator | Tuesday 31 March 2026 04:25:11 +0000 (0:00:00.548) 0:01:49.475 ********* 2026-03-31 04:25:16.008427 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:25:16.008437 | orchestrator | 2026-03-31 04:25:16.008447 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-31 04:25:16.008457 | orchestrator | Tuesday 31 March 2026 04:25:12 +0000 (0:00:00.827) 0:01:50.303 ********* 2026-03-31 
04:25:16.008482 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 04:25:16.643036 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 04:25:16.643139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:16.643162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:16.643209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:16.643446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431356 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-31 04:25:17.431461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:17.431487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 
04:25:17.431495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431618 | orchestrator | 2026-03-31 04:25:17.431627 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-31 04:25:17.431634 | orchestrator | Tuesday 31 March 2026 04:25:16 +0000 (0:00:04.572) 0:01:54.875 ********* 2026-03-31 04:25:17.431641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 04:25:17.431653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:17.431661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.431694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.649648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.649739 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:17.649753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 04:25:17.649765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:17.649800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.649810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.649819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.650439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-31 04:25:17.650472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.650495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-31 04:25:17.650513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.650580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.650590 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:17.650599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-31 04:25:17.650620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-31 04:25:29.879822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:25:29.879940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-31 04:25:29.879982 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:29.879998 | orchestrator | 2026-03-31 04:25:29.880011 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-31 04:25:29.880024 | orchestrator | Tuesday 31 March 2026 04:25:17 +0000 (0:00:01.003) 0:01:55.879 ********* 2026-03-31 04:25:29.880036 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880061 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:29.880073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880104 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:29.880116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-31 04:25:29.880138 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:29.880149 | orchestrator | 2026-03-31 04:25:29.880160 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-31 04:25:29.880171 | orchestrator | Tuesday 31 March 2026 04:25:18 +0000 (0:00:01.280) 0:01:57.159 ********* 2026-03-31 04:25:29.880182 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:25:29.880194 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:25:29.880205 | 
orchestrator | ok: [testbed-node-2] 2026-03-31 04:25:29.880216 | orchestrator | 2026-03-31 04:25:29.880227 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-31 04:25:29.880239 | orchestrator | Tuesday 31 March 2026 04:25:20 +0000 (0:00:01.894) 0:01:59.053 ********* 2026-03-31 04:25:29.880250 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:25:29.880261 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:25:29.880272 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:25:29.880282 | orchestrator | 2026-03-31 04:25:29.880293 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-31 04:25:29.880304 | orchestrator | Tuesday 31 March 2026 04:25:23 +0000 (0:00:02.423) 0:02:01.477 ********* 2026-03-31 04:25:29.880315 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:29.880327 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:29.880341 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:29.880353 | orchestrator | 2026-03-31 04:25:29.880366 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-31 04:25:29.880379 | orchestrator | Tuesday 31 March 2026 04:25:23 +0000 (0:00:00.415) 0:02:01.893 ********* 2026-03-31 04:25:29.880392 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:25:29.880404 | orchestrator | 2026-03-31 04:25:29.880417 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-31 04:25:29.880438 | orchestrator | Tuesday 31 March 2026 04:25:24 +0000 (0:00:01.280) 0:02:03.173 ********* 2026-03-31 04:25:29.880475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 04:25:29.880499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:29.880524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 04:25:33.981046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:33.981155 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-31 04:25:33.981234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:33.981252 | orchestrator | 2026-03-31 04:25:33.981266 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-31 04:25:33.981279 | orchestrator | Tuesday 31 March 2026 04:25:30 +0000 (0:00:05.106) 0:02:08.280 ********* 2026-03-31 04:25:33.981292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 04:25:33.981327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:38.544376 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:38.544496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 04:25:38.544552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-31 04:25:38.544598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:38.544655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-31 04:25:38.544664 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:38.544671 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:38.544677 | orchestrator | 2026-03-31 04:25:38.544685 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-31 04:25:38.544692 | orchestrator | Tuesday 31 March 2026 04:25:34 +0000 (0:00:04.050) 0:02:12.330 ********* 2026-03-31 04:25:38.544700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 04:25:38.544719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 04:25:48.691935 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:48.692767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 04:25:48.692823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 04:25:48.692837 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:48.692849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-31 04:25:48.692861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})
2026-03-31 04:25:48.692873 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:25:48.692884 | orchestrator |
2026-03-31 04:25:48.692898 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-31 04:25:48.692908 | orchestrator | Tuesday 31 March 2026 04:25:38 +0000 (0:00:04.444) 0:02:16.775 *********
2026-03-31 04:25:48.692915 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:25:48.692923 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:25:48.692930 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:25:48.692937 | orchestrator |
2026-03-31 04:25:48.692943 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-31 04:25:48.692950 | orchestrator | Tuesday 31 March 2026 04:25:40 +0000 (0:00:01.500) 0:02:18.275 *********
2026-03-31 04:25:48.692957 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:25:48.692963 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:25:48.692970 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:25:48.692977 | orchestrator |
2026-03-31 04:25:48.692983 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-31 04:25:48.692990 | orchestrator | Tuesday 31 March 2026 04:25:42 +0000 (0:00:00.386) 0:02:20.684 *********
2026-03-31 04:25:48.692997 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:25:48.693004 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:25:48.693012 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:25:48.693019 | orchestrator |
2026-03-31 04:25:48.693026 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-31 04:25:48.693034 | orchestrator | Tuesday 31 March 2026 04:25:44 +0000 (0:00:01.252) 0:02:22.323 *********
2026-03-31 04:25:48.693041 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:25:48.693048 |
orchestrator | 2026-03-31 04:25:48.693056 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-31 04:25:48.693063 | orchestrator | Tuesday 31 March 2026 04:25:44 +0000 (0:00:01.252) 0:02:22.323 ********* 2026-03-31 04:25:48.693084 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 04:25:48.693119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 04:25:48.693128 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-31 04:25:48.693136 | orchestrator | 2026-03-31 04:25:48.693144 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-31 04:25:48.693152 | orchestrator | Tuesday 31 March 2026 04:25:47 +0000 (0:00:03.677) 0:02:26.001 ********* 2026-03-31 04:25:48.693159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 04:25:48.693168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 04:25:48.693176 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:48.693183 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:48.693191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-31 04:25:48.693204 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:48.693212 | orchestrator | 2026-03-31 04:25:48.693223 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-31 04:25:48.693231 | orchestrator | Tuesday 31 March 2026 04:25:48 +0000 (0:00:00.463) 0:02:26.464 ********* 2026-03-31 04:25:48.693239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-31 04:25:48.693253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-31 04:25:59.179289 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:59.180252 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-31 04:25:59.180297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-31 04:25:59.180313 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:25:59.180326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-31 04:25:59.180337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-31 04:25:59.180348 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:25:59.180360 | orchestrator |
2026-03-31 04:25:59.180372 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-31 04:25:59.180385 | orchestrator | Tuesday 31 March 2026 04:25:49 +0000 (0:00:01.151) 0:02:27.615 *********
2026-03-31 04:25:59.180396 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:25:59.180408 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:25:59.180419 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:25:59.180430 | orchestrator |
2026-03-31 04:25:59.180441 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-31 04:25:59.180452 | orchestrator | Tuesday 31 March 2026 04:25:50 +0000 (0:00:01.364) 0:02:28.980 *********
2026-03-31 04:25:59.180463 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:25:59.180474 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:25:59.180485 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:25:59.180496
| orchestrator | 2026-03-31 04:25:59.180507 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-31 04:25:59.180518 | orchestrator | Tuesday 31 March 2026 04:25:53 +0000 (0:00:02.456) 0:02:31.437 ********* 2026-03-31 04:25:59.180529 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:25:59.180540 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:25:59.180552 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:25:59.180563 | orchestrator | 2026-03-31 04:25:59.180581 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-31 04:25:59.180601 | orchestrator | Tuesday 31 March 2026 04:25:53 +0000 (0:00:00.716) 0:02:32.153 ********* 2026-03-31 04:25:59.180620 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:25:59.180638 | orchestrator | 2026-03-31 04:25:59.180658 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-31 04:25:59.180675 | orchestrator | Tuesday 31 March 2026 04:25:55 +0000 (0:00:01.099) 0:02:33.252 ********* 2026-03-31 04:25:59.180759 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 04:25:59.180938 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 04:25:59.181001 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-31 04:26:01.596245 | orchestrator | 2026-03-31 04:26:01.596335 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-31 04:26:01.596347 | 
orchestrator | Tuesday 31 March 2026 04:25:59 +0000 (0:00:04.153) 0:02:37.406 ********* 2026-03-31 04:26:01.596360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 04:26:01.596390 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:01.596436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 04:26:01.596453 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:01.596466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-31 04:26:01.596489 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:01.596502 | orchestrator | 2026-03-31 04:26:01.596515 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-31 04:26:01.596523 | orchestrator | Tuesday 31 March 2026 04:26:00 +0000 (0:00:01.288) 0:02:38.694 ********* 2026-03-31 04:26:01.596537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:01.596547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:01.596558 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:01.596574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:12.498479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 04:26:12.498590 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:12.498609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:12.498626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:12.498641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:12.498678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:12.498691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 04:26:12.498702 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:12.498714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:12.498725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:12.498736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-31 04:26:12.498763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-31 04:26:12.498775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-31 04:26:12.498786 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:12.498830 | orchestrator | 2026-03-31 04:26:12.498843 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-31 04:26:12.498855 | orchestrator | Tuesday 31 March 2026 04:26:01 +0000 (0:00:01.127) 0:02:39.821 ********* 2026-03-31 04:26:12.498867 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:12.498878 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:12.498889 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:12.498900 | orchestrator | 2026-03-31 04:26:12.498911 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-31 04:26:12.498922 | orchestrator | Tuesday 31 March 2026 04:26:03 +0000 (0:00:02.152) 0:02:41.973 ********* 2026-03-31 04:26:12.498933 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:12.498944 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:12.498955 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:12.498967 | orchestrator | 2026-03-31 04:26:12.498986 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-31 04:26:12.499007 | orchestrator | Tuesday 31 March 2026 04:26:06 +0000 (0:00:02.407) 0:02:44.381 ********* 2026-03-31 04:26:12.499023 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:12.499041 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:12.499059 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:12.499075 | orchestrator | 2026-03-31 
04:26:12.499113 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-31 04:26:12.499131 | orchestrator | Tuesday 31 March 2026 04:26:06 +0000 (0:00:00.377) 0:02:44.758 ********* 2026-03-31 04:26:12.499164 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:12.499180 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:12.499197 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:12.499215 | orchestrator | 2026-03-31 04:26:12.499233 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-31 04:26:12.499250 | orchestrator | Tuesday 31 March 2026 04:26:06 +0000 (0:00:00.382) 0:02:45.140 ********* 2026-03-31 04:26:12.499267 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:26:12.499286 | orchestrator | 2026-03-31 04:26:12.499305 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-31 04:26:12.499322 | orchestrator | Tuesday 31 March 2026 04:26:08 +0000 (0:00:01.490) 0:02:46.631 ********* 2026-03-31 04:26:12.499347 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 04:26:12.499375 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 04:26:12.499407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:12.499430 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:12.499477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 04:26:13.204685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 04:26:13.204876 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-31 04:26:13.204904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:13.204936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 04:26:13.204950 | orchestrator | 2026-03-31 04:26:13.204964 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-31 04:26:13.204976 | orchestrator | Tuesday 31 March 2026 04:26:12 +0000 (0:00:04.093) 0:02:50.725 ********* 2026-03-31 04:26:13.205033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 04:26:13.205048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:13.205061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 04:26:13.205073 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:13.205086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 04:26:13.205105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:13.205125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-31 04:26:13.205137 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:13.205157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-31 04:26:24.683461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-31 04:26:24.683553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2026-03-31 04:26:24.683562 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:24.683571 | orchestrator | 2026-03-31 04:26:24.683578 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-31 04:26:24.683585 | orchestrator | Tuesday 31 March 2026 04:26:13 +0000 (0:00:00.705) 0:02:51.431 ********* 2026-03-31 04:26:24.683593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 04:26:24.683614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 04:26:24.683635 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:24.683642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 04:26:24.683648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 04:26:24.683655 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:24.683665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 
04:26:24.683675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-31 04:26:24.683684 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:24.683694 | orchestrator | 2026-03-31 04:26:24.683704 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-31 04:26:24.683713 | orchestrator | Tuesday 31 March 2026 04:26:14 +0000 (0:00:01.341) 0:02:52.773 ********* 2026-03-31 04:26:24.683722 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:24.683732 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:24.683741 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:24.683750 | orchestrator | 2026-03-31 04:26:24.683759 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-31 04:26:24.683768 | orchestrator | Tuesday 31 March 2026 04:26:15 +0000 (0:00:01.453) 0:02:54.226 ********* 2026-03-31 04:26:24.683778 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:24.683787 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:24.683796 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:24.683806 | orchestrator | 2026-03-31 04:26:24.683816 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-31 04:26:24.683826 | orchestrator | Tuesday 31 March 2026 04:26:18 +0000 (0:00:02.386) 0:02:56.613 ********* 2026-03-31 04:26:24.683836 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:24.683846 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:24.683912 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:24.683920 | orchestrator | 2026-03-31 04:26:24.683926 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-03-31 04:26:24.683945 | orchestrator | Tuesday 31 March 2026 04:26:19 +0000 (0:00:00.708) 0:02:57.322 ********* 2026-03-31 04:26:24.683952 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:26:24.683958 | orchestrator | 2026-03-31 04:26:24.683963 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-31 04:26:24.683970 | orchestrator | Tuesday 31 March 2026 04:26:20 +0000 (0:00:01.204) 0:02:58.526 ********* 2026-03-31 04:26:24.683977 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:26:24.683999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:24.684007 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:26:24.684013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:24.684026 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-31 04:26:31.186563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:31.186689 | orchestrator | 2026-03-31 04:26:31.186705 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-31 04:26:31.186716 | orchestrator 
| Tuesday 31 March 2026 04:26:24 +0000 (0:00:04.382) 0:03:02.909 ********* 2026-03-31 04:26:31.186742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:26:31.186754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:31.186763 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:31.186774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:26:31.186800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:31.186821 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:31.186831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-31 04:26:31.186845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-31 04:26:31.186855 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:31.186864 | orchestrator | 2026-03-31 04:26:31.186873 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-31 04:26:31.186882 | orchestrator | Tuesday 31 March 2026 04:26:25 +0000 (0:00:01.280) 0:03:04.190 ********* 2026-03-31 04:26:31.186950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-03-31 04:26:31.186963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 04:26:31.186974 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:31.186983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-31 04:26:31.186992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 04:26:31.187001 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:31.187010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-31 04:26:31.187020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-31 04:26:31.187030 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:31.187041 | orchestrator | 2026-03-31 04:26:31.187051 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-31 04:26:31.187062 | orchestrator | Tuesday 31 March 2026 04:26:26 +0000 (0:00:01.015) 0:03:05.205 ********* 2026-03-31 04:26:31.187073 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:31.187084 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:31.187101 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:31.187112 | orchestrator | 2026-03-31 04:26:31.187122 | orchestrator | TASK [proxysql-config 
: Copying over magnum ProxySQL rules config] ************* 2026-03-31 04:26:31.187132 | orchestrator | Tuesday 31 March 2026 04:26:28 +0000 (0:00:01.410) 0:03:06.616 ********* 2026-03-31 04:26:31.187142 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:31.187151 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:31.187160 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:31.187169 | orchestrator | 2026-03-31 04:26:31.187178 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-31 04:26:31.187194 | orchestrator | Tuesday 31 March 2026 04:26:31 +0000 (0:00:02.795) 0:03:09.411 ********* 2026-03-31 04:26:36.502912 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:26:36.503040 | orchestrator | 2026-03-31 04:26:36.503051 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-31 04:26:36.503059 | orchestrator | Tuesday 31 March 2026 04:26:32 +0000 (0:00:01.203) 0:03:10.615 ********* 2026-03-31 04:26:36.503070 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 04:26:36.503082 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 04:26:36.503132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-31 04:26:36.503232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:36.503259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735053 | orchestrator | 2026-03-31 04:26:37.735182 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-31 04:26:37.735212 | orchestrator | Tuesday 31 March 2026 04:26:36 +0000 (0:00:04.237) 
0:03:14.853 ********* 2026-03-31 04:26:37.735238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 04:26:37.735282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735378 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:37.735400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 04:26:37.735442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 
'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735512 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:37.735531 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-31 04:26:37.735572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-31 04:26:37.735623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-31 04:26:51.756898 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:51.757075 | orchestrator | 2026-03-31 04:26:51.757108 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-31 04:26:51.757124 | orchestrator | Tuesday 31 March 2026 04:26:37 +0000 (0:00:01.232) 0:03:16.086 ********* 2026-03-31 04:26:51.757136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757162 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:51.757174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757201 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757214 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:51.757225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-31 04:26:51.757273 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:51.757286 | orchestrator | 2026-03-31 04:26:51.757297 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-31 04:26:51.757309 | orchestrator | Tuesday 31 March 2026 04:26:38 +0000 (0:00:01.136) 0:03:17.222 ********* 2026-03-31 04:26:51.757364 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:51.757379 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:51.757390 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:51.757401 | orchestrator | 2026-03-31 04:26:51.757412 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-31 04:26:51.757424 | orchestrator | Tuesday 31 March 2026 04:26:41 +0000 (0:00:02.226) 0:03:19.449 ********* 2026-03-31 04:26:51.757437 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:26:51.757450 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:26:51.757462 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:26:51.757474 | orchestrator | 2026-03-31 04:26:51.757487 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-31 04:26:51.757499 | 
orchestrator | Tuesday 31 March 2026 04:26:43 +0000 (0:00:02.514) 0:03:21.963 ********* 2026-03-31 04:26:51.757512 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:26:51.757525 | orchestrator | 2026-03-31 04:26:51.757537 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-31 04:26:51.757551 | orchestrator | Tuesday 31 March 2026 04:26:45 +0000 (0:00:01.383) 0:03:23.346 ********* 2026-03-31 04:26:51.757563 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:26:51.757577 | orchestrator | 2026-03-31 04:26:51.757589 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-31 04:26:51.757602 | orchestrator | Tuesday 31 March 2026 04:26:48 +0000 (0:00:03.488) 0:03:26.834 ********* 2026-03-31 04:26:51.757640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:51.757665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 04:26:51.757689 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:51.757702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:51.757715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-31 04:26:51.757727 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:26:51.757754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:54.611859 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 04:26:54.611969 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:54.611984 | orchestrator | 2026-03-31 04:26:54.611993 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-31 04:26:54.612003 | orchestrator | Tuesday 31 March 2026 04:26:51 +0000 (0:00:03.129) 0:03:29.964 ********* 2026-03-31 04:26:54.612070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:54.612087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 04:26:54.612129 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:26:54.612183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:54.612199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 04:26:54.612207 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:26:54.612215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:26:54.612244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-31 04:27:06.172983 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:06.173162 | orchestrator | 2026-03-31 04:27:06.173181 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-31 04:27:06.173195 | orchestrator | Tuesday 31 March 2026 04:26:54 +0000 (0:00:02.872) 0:03:32.837 ********* 2026-03-31 04:27:06.173230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173297 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:06.173314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173357 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:06.173374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-31 04:27:06.173426 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:06.173441 | orchestrator | 2026-03-31 04:27:06.173456 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-31 04:27:06.173470 | orchestrator | Tuesday 31 March 2026 04:26:58 +0000 (0:00:03.423) 0:03:36.261 ********* 2026-03-31 04:27:06.173479 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:27:06.173507 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:27:06.173523 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:27:06.173537 | orchestrator | 2026-03-31 04:27:06.173561 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-31 04:27:06.173575 | orchestrator | Tuesday 31 March 2026 04:27:00 +0000 (0:00:02.442) 0:03:38.703 ********* 2026-03-31 04:27:06.173589 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:06.173604 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:06.173617 | orchestrator | skipping: 
[testbed-node-2] 2026-03-31 04:27:06.173630 | orchestrator | 2026-03-31 04:27:06.173644 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-31 04:27:06.173658 | orchestrator | Tuesday 31 March 2026 04:27:02 +0000 (0:00:01.901) 0:03:40.604 ********* 2026-03-31 04:27:06.173671 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:06.173684 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:06.173698 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:06.173711 | orchestrator | 2026-03-31 04:27:06.173725 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-31 04:27:06.173740 | orchestrator | Tuesday 31 March 2026 04:27:02 +0000 (0:00:00.394) 0:03:40.999 ********* 2026-03-31 04:27:06.173753 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:27:06.173766 | orchestrator | 2026-03-31 04:27:06.173778 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-31 04:27:06.173792 | orchestrator | Tuesday 31 March 2026 04:27:04 +0000 (0:00:01.675) 0:03:42.675 ********* 2026-03-31 04:27:06.173806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2026-03-31 04:27:06.173840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-31 04:27:06.173858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-31 04:27:06.173872 | orchestrator | 2026-03-31 04:27:06.173887 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-31 04:27:06.173902 | orchestrator | Tuesday 31 March 2026 04:27:06 +0000 (0:00:01.590) 0:03:44.265 ********* 2026-03-31 04:27:06.173942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-31 04:27:16.746794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-31 04:27:16.746907 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:16.746926 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:16.746940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-31 04:27:16.746975 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:16.746988 | orchestrator | 2026-03-31 04:27:16.747001 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-31 04:27:16.747013 | orchestrator | Tuesday 31 March 2026 04:27:06 +0000 (0:00:00.487) 0:03:44.752 ********* 2026-03-31 04:27:16.747026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-31 04:27:16.747039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-31 04:27:16.747050 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:16.747061 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:16.747073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-31 04:27:16.747084 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
04:27:16.747095 | orchestrator | 2026-03-31 04:27:16.747106 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-31 04:27:16.747153 | orchestrator | Tuesday 31 March 2026 04:27:07 +0000 (0:00:01.154) 0:03:45.907 ********* 2026-03-31 04:27:16.747164 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:16.747176 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:16.747186 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:16.747197 | orchestrator | 2026-03-31 04:27:16.747208 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-31 04:27:16.747219 | orchestrator | Tuesday 31 March 2026 04:27:08 +0000 (0:00:00.543) 0:03:46.450 ********* 2026-03-31 04:27:16.747230 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:16.747240 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:16.747251 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:16.747262 | orchestrator | 2026-03-31 04:27:16.747287 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-31 04:27:16.747299 | orchestrator | Tuesday 31 March 2026 04:27:09 +0000 (0:00:01.611) 0:03:48.061 ********* 2026-03-31 04:27:16.747310 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:16.747322 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:16.747335 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:16.747348 | orchestrator | 2026-03-31 04:27:16.747361 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-31 04:27:16.747373 | orchestrator | Tuesday 31 March 2026 04:27:10 +0000 (0:00:00.447) 0:03:48.509 ********* 2026-03-31 04:27:16.747386 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:27:16.747399 | orchestrator | 2026-03-31 04:27:16.747412 | orchestrator | TASK [haproxy-config : 
Copying over neutron haproxy config] ******************** 2026-03-31 04:27:16.747424 | orchestrator | Tuesday 31 March 2026 04:27:12 +0000 (0:00:01.832) 0:03:50.341 ********* 2026-03-31 04:27:16.747457 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 04:27:16.747480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.747493 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.747505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.747522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 04:27:16.747543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:16.856415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 04:27:16.856422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:16.856428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:16.856477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:16.856485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 04:27:16.856490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 04:27:16.856498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:16.856505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.088447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.088531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:17.088593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 04:27:17.088608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:17.088617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  
2026-03-31 04:27:17.088640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.088662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:17.088669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.088676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 04:27:17.088688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:17.088695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.088707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 04:27:17.088720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:17.410178 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-31 04:27:17.410271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-31 04:27:17.410363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:17.410382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:17.410397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:17.410420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:17.410434 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 04:27:19.104699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.104794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.104826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 04:27:19.104863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:19.104875 | orchestrator | 2026-03-31 04:27:19.104888 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-31 04:27:19.104899 | orchestrator | Tuesday 31 March 2026 04:27:17 +0000 (0:00:05.298) 0:03:55.639 ********* 2026-03-31 04:27:19.104910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 04:27:19.104939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.104952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.104975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.104986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-31 04:27:19.104996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.105015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 04:27:19.204408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.204517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.204563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:19.204627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 04:27:19.204636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.204653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 04:27:19.204662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.204676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.472966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.473019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.473026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.473082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 04:27:19.473090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:19.473104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:19.473196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.473205 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:27:19.473214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-31 04:27:19.473219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-31 04:27:19.473223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.473227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-31 04:27:19.473235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-31 04:27:19.779285 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:27:19.779412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-31 04:27:19.779434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.779448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2026-03-31 04:27:19.779461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-31 04:27:19.779473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-31 04:27:19.779720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-31 04:27:19.779745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 04:27:19.779765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 04:27:19.779788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-31 04:27:19.779807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-31 04:27:19.779826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-31 04:27:19.779871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-31 04:27:32.221739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-31 04:27:32.221852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-31 04:27:32.221870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-31 04:27:32.221889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-31 04:27:32.221903 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:27:32.221917 | orchestrator |
2026-03-31 04:27:32.221954 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-31 04:27:32.221967 | orchestrator | Tuesday 31 March 2026 04:27:19 +0000 (0:00:02.367) 0:03:58.007 *********
2026-03-31 04:27:32.221980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.221994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.222007 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:27:32.222071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.222085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.222097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.222130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-31 04:27:32.222149 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:27:32.222161 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:27:32.222172 | orchestrator |
2026-03-31 04:27:32.222208 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-31 04:27:32.222221 | orchestrator | Tuesday 31 March 2026 04:27:21 +0000 (0:00:01.963) 0:03:59.970 *********
2026-03-31 04:27:32.222235 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:27:32.222248 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:27:32.222261 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:27:32.222273 | orchestrator |
2026-03-31 04:27:32.222285 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-31 04:27:32.222298 | orchestrator | Tuesday 31 March 2026 04:27:23 +0000 (0:00:02.232) 0:04:02.203 *********
2026-03-31 04:27:32.222310 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:27:32.222328 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:27:32.222357 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:27:32.222375 | orchestrator |
2026-03-31 04:27:32.222395 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-31 04:27:32.222414 | orchestrator | Tuesday 31 March 2026 04:27:26 +0000 (0:00:02.453) 0:04:04.656 *********
2026-03-31 04:27:32.222434 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:27:32.222455 | orchestrator |
2026-03-31 04:27:32.222469 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-31 04:27:32.222482 | orchestrator | Tuesday 31 March 2026 04:27:27 +0000 (0:00:01.400) 0:04:06.057 *********
2026-03-31 04:27:32.222496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:32.222525 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:32.222588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:32.222603 | orchestrator |
2026-03-31 04:27:32.222615 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-31 04:27:32.222635 | orchestrator | Tuesday 31 March 2026 04:27:32 +0000 (0:00:04.386) 0:04:10.444 *********
2026-03-31 04:27:44.986394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:44.986469 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:27:44.986481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:44.986502 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:27:44.986511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-31 04:27:44.986517 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:27:44.986524 | orchestrator |
2026-03-31 04:27:44.986531 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-31 04:27:44.986539 | orchestrator | Tuesday 31 March 2026 04:27:32 +0000 (0:00:00.582) 0:04:11.026 *********
2026-03-31 04:27:44.986547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986564 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:27:44.986570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986583 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:27:44.986602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-31 04:27:44.986616 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:27:44.986623 | orchestrator |
2026-03-31 04:27:44.986629 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-31 04:27:44.986636 | orchestrator | Tuesday 31 March 2026 04:27:33 +0000 (0:00:00.872) 0:04:11.899 *********
2026-03-31 04:27:44.986642 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:27:44.986649 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:27:44.986656 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:27:44.986662 | orchestrator |
2026-03-31 04:27:44.986668 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-31 04:27:44.986674 | orchestrator | Tuesday 31 March 2026 04:27:35 +0000 (0:00:02.216) 0:04:14.115 *********
2026-03-31 04:27:44.986681 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:27:44.986687 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:27:44.986694 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:27:44.986700 | orchestrator |
2026-03-31 04:27:44.986706 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-31 04:27:44.986718 | orchestrator | Tuesday 31 March 2026 04:27:38 +0000 (0:00:02.338) 0:04:16.454 *********
2026-03-31 04:27:44.986725 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:27:44.986731 | orchestrator |
2026-03-31 04:27:44.986737 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-31 04:27:44.986743 | orchestrator | Tuesday 31 March 2026 04:27:39 +0000 (0:00:01.361) 0:04:17.815 *********
2026-03-31 04:27:44.986751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:44.986781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:44.986790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:44.986806 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:45.757743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:45.757831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757857 | orchestrator |
2026-03-31 04:27:45.757862 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-31 04:27:45.757867 | orchestrator | Tuesday 31 March 2026 04:27:44 +0000 (0:00:05.391) 0:04:23.207 *********
2026-03-31 04:27:45.757891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:45.757897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:45.757905 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:27:45.757910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:45.757923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:59.427568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:59.427686 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:27:59.427725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-31 04:27:59.427755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-31 04:27:59.427768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-31 04:27:59.427780 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:27:59.427792 | orchestrator |
2026-03-31 04:27:59.427832 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-31 04:27:59.427846 | orchestrator | Tuesday 31 March 2026 04:27:45 +0000 (0:00:00.779) 0:04:23.986 *********
2026-03-31 04:27:59.427859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427947 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:27:59.427959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-31 04:27:59.427992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-31 04:27:59.428004 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:27:59.428015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.428026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-31 04:27:59.428040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-31 04:27:59.428057 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-31 04:27:59.428075 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:27:59.428086 | orchestrator | 2026-03-31 04:27:59.428097 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-31 04:27:59.428108 | orchestrator | Tuesday 31 March 2026 04:27:46 +0000 (0:00:01.001) 0:04:24.988 ********* 2026-03-31 04:27:59.428120 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:27:59.428131 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:27:59.428142 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:27:59.428153 | orchestrator | 2026-03-31 04:27:59.428164 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-31 04:27:59.428174 | orchestrator | Tuesday 31 March 2026 04:27:48 +0000 (0:00:02.005) 0:04:26.994 ********* 2026-03-31 04:27:59.428195 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:27:59.428206 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:27:59.428217 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:27:59.428227 | orchestrator | 2026-03-31 04:27:59.428238 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-31 04:27:59.428249 | orchestrator | Tuesday 31 March 2026 04:27:51 +0000 (0:00:03.111) 0:04:30.106 ********* 2026-03-31 04:27:59.428260 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:27:59.428271 | orchestrator | 2026-03-31 04:27:59.428281 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-31 04:27:59.428292 | orchestrator | Tuesday 31 March 2026 04:27:53 +0000 (0:00:01.412) 0:04:31.519 ********* 2026-03-31 04:27:59.428322 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-31 04:27:59.428336 | orchestrator | 2026-03-31 04:27:59.428347 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-31 04:27:59.428357 | orchestrator | Tuesday 31 March 2026 04:27:54 +0000 (0:00:01.409) 0:04:32.929 ********* 2026-03-31 04:27:59.428370 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 04:27:59.428398 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 04:28:14.534442 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-31 04:28:14.534572 | orchestrator | 2026-03-31 04:28:14.534593 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-31 04:28:14.534607 | orchestrator | Tuesday 31 March 2026 04:27:59 +0000 (0:00:04.725) 0:04:37.654 ********* 2026-03-31 04:28:14.534621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.534635 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:14.534648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.534684 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:14.534696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.534708 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:14.534719 | orchestrator | 2026-03-31 04:28:14.534730 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-31 04:28:14.534741 | orchestrator | Tuesday 31 March 2026 04:28:01 +0000 (0:00:02.117) 0:04:39.771 ********* 2026-03-31 04:28:14.534754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 04:28:14.534769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 04:28:14.534782 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:14.534793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 04:28:14.534811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 04:28:14.534830 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:14.534867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-03-31 04:28:14.534887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-31 04:28:14.534929 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:14.534953 | orchestrator | 2026-03-31 04:28:14.534972 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 04:28:14.534994 | orchestrator | Tuesday 31 March 2026 04:28:03 +0000 (0:00:01.881) 0:04:41.652 ********* 2026-03-31 04:28:14.535007 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:14.535021 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:14.535033 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:14.535046 | orchestrator | 2026-03-31 04:28:14.535059 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 04:28:14.535071 | orchestrator | Tuesday 31 March 2026 04:28:06 +0000 (0:00:02.707) 0:04:44.360 ********* 2026-03-31 04:28:14.535084 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:14.535095 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:14.535106 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:14.535117 | orchestrator | 2026-03-31 04:28:14.535128 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-31 04:28:14.535139 | orchestrator | Tuesday 31 March 2026 04:28:09 +0000 (0:00:03.405) 0:04:47.766 ********* 2026-03-31 04:28:14.535150 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-31 04:28:14.535174 | orchestrator | 2026-03-31 04:28:14.535186 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy 
config] *** 2026-03-31 04:28:14.535197 | orchestrator | Tuesday 31 March 2026 04:28:11 +0000 (0:00:01.825) 0:04:49.592 ********* 2026-03-31 04:28:14.535209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.535222 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:14.535234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.535252 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:14.535270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.535290 | 
orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:14.535305 | orchestrator | 2026-03-31 04:28:14.535323 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-31 04:28:14.535335 | orchestrator | Tuesday 31 March 2026 04:28:12 +0000 (0:00:01.472) 0:04:51.064 ********* 2026-03-31 04:28:14.535347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.535358 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:14.535403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:14.535416 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:14.535438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-31 04:28:41.631942 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:41.632053 | orchestrator | 2026-03-31 04:28:41.632068 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-31 04:28:41.632080 | orchestrator | Tuesday 31 March 2026 04:28:14 +0000 (0:00:01.694) 0:04:52.759 ********* 2026-03-31 04:28:41.632092 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:41.632102 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:41.632112 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:41.632122 | orchestrator | 2026-03-31 04:28:41.632132 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 04:28:41.632142 | orchestrator | Tuesday 31 March 2026 04:28:16 +0000 (0:00:02.239) 0:04:54.999 ********* 2026-03-31 04:28:41.632152 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:41.632163 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:41.632173 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:41.632182 | orchestrator | 2026-03-31 04:28:41.632192 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 04:28:41.632202 | orchestrator | Tuesday 31 March 2026 04:28:19 +0000 (0:00:02.736) 0:04:57.736 ********* 2026-03-31 04:28:41.632212 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:41.632222 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:41.632231 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:41.632241 | orchestrator | 2026-03-31 04:28:41.632251 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-31 04:28:41.632261 | orchestrator | Tuesday 31 March 2026 04:28:23 
+0000 (0:00:03.582) 0:05:01.318 ********* 2026-03-31 04:28:41.632271 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-31 04:28:41.632282 | orchestrator | 2026-03-31 04:28:41.632292 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-31 04:28:41.632302 | orchestrator | Tuesday 31 March 2026 04:28:24 +0000 (0:00:01.029) 0:05:02.348 ********* 2026-03-31 04:28:41.632313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 04:28:41.632326 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:41.632337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 04:28:41.632347 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:41.632357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 04:28:41.632389 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:41.632400 | orchestrator | 2026-03-31 04:28:41.632410 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-31 04:28:41.632421 | orchestrator | Tuesday 31 March 2026 04:28:25 +0000 (0:00:01.708) 0:05:04.056 ********* 2026-03-31 04:28:41.632445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 04:28:41.632456 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:41.632558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 
04:28:41.632581 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:41.632599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-31 04:28:41.632617 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:41.632634 | orchestrator | 2026-03-31 04:28:41.632652 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-31 04:28:41.632670 | orchestrator | Tuesday 31 March 2026 04:28:27 +0000 (0:00:01.581) 0:05:05.638 ********* 2026-03-31 04:28:41.632687 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:41.632705 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:41.632721 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:41.632737 | orchestrator | 2026-03-31 04:28:41.632754 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-31 04:28:41.632772 | orchestrator | Tuesday 31 March 2026 04:28:29 +0000 (0:00:02.216) 0:05:07.854 ********* 2026-03-31 04:28:41.632790 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:41.632806 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:41.632821 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:41.632837 | orchestrator | 2026-03-31 04:28:41.632853 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-31 04:28:41.632871 | orchestrator | Tuesday 31 March 2026 04:28:32 +0000 (0:00:02.683) 0:05:10.537 ********* 2026-03-31 04:28:41.632888 | 
orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:41.632902 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:41.632912 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:41.632922 | orchestrator | 2026-03-31 04:28:41.632932 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-31 04:28:41.632942 | orchestrator | Tuesday 31 March 2026 04:28:35 +0000 (0:00:03.579) 0:05:14.116 ********* 2026-03-31 04:28:41.632951 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:28:41.632961 | orchestrator | 2026-03-31 04:28:41.632971 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-31 04:28:41.632992 | orchestrator | Tuesday 31 March 2026 04:28:37 +0000 (0:00:01.727) 0:05:15.843 ********* 2026-03-31 04:28:41.633005 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 04:28:41.633024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:41.633045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:42.424562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 04:28:42.424600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:42.424627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-31 04:28:42.424672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:42.424695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:42.424715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.424744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:42.424757 | orchestrator | 2026-03-31 04:28:42.424771 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-31 04:28:42.424783 | orchestrator | Tuesday 31 March 2026 04:28:41 +0000 (0:00:04.158) 0:05:20.001 ********* 2026-03-31 04:28:42.424807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 04:28:42.584623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:42.584713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.584722 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.584738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:42.584745 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:42.584752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 04:28:42.584757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:42.584776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.584787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.584792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:42.584797 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:42.584805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-31 04:28:42.584810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-31 04:28:42.584815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-31 04:28:42.584825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-31 04:28:55.619413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-31 04:28:55.619594 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:55.619617 | orchestrator | 2026-03-31 04:28:55.619631 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-31 04:28:55.619644 | orchestrator | Tuesday 31 March 2026 04:28:42 +0000 (0:00:00.816) 0:05:20.818 ********* 2026-03-31 04:28:55.619656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619684 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:55.619696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619718 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:55.619729 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-31 04:28:55.619769 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:55.619780 | orchestrator | 2026-03-31 04:28:55.619792 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-31 04:28:55.619803 | orchestrator | Tuesday 31 March 2026 04:28:44 +0000 (0:00:01.431) 0:05:22.250 ********* 2026-03-31 04:28:55.619814 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:55.619826 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:55.619837 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:55.619848 | orchestrator | 2026-03-31 04:28:55.619859 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-31 04:28:55.619870 | orchestrator | Tuesday 31 March 2026 04:28:45 +0000 (0:00:01.478) 0:05:23.729 ********* 2026-03-31 04:28:55.619881 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:28:55.619892 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:28:55.619903 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:28:55.619914 | orchestrator | 2026-03-31 04:28:55.619925 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-31 04:28:55.619936 | orchestrator | Tuesday 31 March 2026 04:28:47 +0000 (0:00:02.337) 0:05:26.067 ********* 2026-03-31 04:28:55.619970 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:28:55.619985 | orchestrator | 2026-03-31 04:28:55.619998 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-31 04:28:55.620010 | orchestrator | Tuesday 31 March 2026 04:28:49 +0000 (0:00:01.798) 0:05:27.865 ********* 2026-03-31 04:28:55.620025 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:28:55.620062 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:28:55.620080 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:28:55.620110 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:28:55.620135 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:28:55.620180 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:28:58.201945 | orchestrator | 2026-03-31 04:28:58.202111 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-31 04:28:58.202130 | orchestrator | Tuesday 31 March 2026 04:28:55 +0000 (0:00:05.979) 0:05:33.844 ********* 2026-03-31 04:28:58.202145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:28:58.202180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:28:58.202218 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:58.202232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:28:58.202246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:28:58.202277 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:58.202290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:28:58.202308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:28:58.202328 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:28:58.202339 | orchestrator | 2026-03-31 04:28:58.202351 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-31 04:28:58.202363 | orchestrator | Tuesday 31 March 2026 04:28:56 +0000 (0:00:00.788) 0:05:34.632 ********* 2026-03-31 04:28:58.202375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-31 04:28:58.202389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:28:58.202403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:28:58.202415 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:28:58.202427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-03-31 04:28:58.202438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:28:58.202452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:28:58.202465 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:28:58.202478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-31 04:28:58.202491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:28:58.202518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-31 04:29:04.665188 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:04.665303 | orchestrator | 2026-03-31 04:29:04.665321 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-31 04:29:04.665335 | orchestrator | Tuesday 31 March 2026 04:28:58 +0000 (0:00:01.791) 0:05:36.424 ********* 2026-03-31 04:29:04.665347 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:04.665358 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:04.665370 | orchestrator | 
skipping: [testbed-node-2] 2026-03-31 04:29:04.665381 | orchestrator | 2026-03-31 04:29:04.665393 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-31 04:29:04.665404 | orchestrator | Tuesday 31 March 2026 04:28:58 +0000 (0:00:00.508) 0:05:36.933 ********* 2026-03-31 04:29:04.665415 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:04.665426 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:04.665437 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:04.665474 | orchestrator | 2026-03-31 04:29:04.665486 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-31 04:29:04.665497 | orchestrator | Tuesday 31 March 2026 04:29:00 +0000 (0:00:01.542) 0:05:38.475 ********* 2026-03-31 04:29:04.665508 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:29:04.665520 | orchestrator | 2026-03-31 04:29:04.665531 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-31 04:29:04.665542 | orchestrator | Tuesday 31 March 2026 04:29:02 +0000 (0:00:01.854) 0:05:40.329 ********* 2026-03-31 04:29:04.665650 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 04:29:04.665672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:04.665686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:04.665699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:04.665712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:04.665745 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 04:29:04.665778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:04.665792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:04.665806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:04.665819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:04.665833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-31 04:29:04.665846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:04.665868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332568 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:07.332660 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 04:29:07.332680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:07.332693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:07.332801 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 04:29:07.332815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:07.332826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:07.332862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:07.332889 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-31 04:29:08.103323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:08.103457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.103486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.103510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:08.103561 | orchestrator | 2026-03-31 04:29:08.103654 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-31 04:29:08.103679 | orchestrator | Tuesday 31 March 2026 04:29:07 +0000 (0:00:05.411) 0:05:45.740 ********* 2026-03-31 04:29:08.103700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-31 04:29:08.103719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:08.103771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.103785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.103797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:08.103814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-31 04:29:08.103839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:08.103875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-31 04:29:08.267775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:08.267878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.267895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.267908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.267950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.267963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:08.267975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:08.267988 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:08.268037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-31 04:29:08.268055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:08.268067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.268087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:08.268099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-31 04:29:08.268111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:08.268123 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:08.268147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-31 04:29:10.041449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:10.041556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:10.041568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-31 04:29:10.041640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-31 04:29:10.041652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-31 04:29:10.041670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:10.041691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-31 04:29:10.041696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-31 04:29:10.041702 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:10.041708 | orchestrator | 2026-03-31 04:29:10.041715 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-31 04:29:10.041728 | orchestrator | Tuesday 31 March 2026 04:29:08 +0000 (0:00:00.908) 0:05:46.649 ********* 2026-03-31 04:29:10.041735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:10.041782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:10.041791 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:10.041800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:10.041823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:10.041831 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:10.041839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-31 04:29:10.041862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:10.041878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-31 04:29:19.112350 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:19.112452 | orchestrator | 2026-03-31 04:29:19.112467 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-31 04:29:19.112481 | orchestrator | Tuesday 31 March 2026 04:29:09 +0000 (0:00:01.572) 0:05:48.222 ********* 2026-03-31 04:29:19.112491 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:19.112522 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:19.112533 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:19.112543 | orchestrator | 2026-03-31 04:29:19.112553 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-31 04:29:19.112563 | orchestrator | Tuesday 31 March 2026 04:29:10 +0000 (0:00:00.557) 0:05:48.779 ********* 2026-03-31 04:29:19.112573 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:19.112582 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:19.112592 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:19.112602 | orchestrator | 2026-03-31 04:29:19.112612 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-31 04:29:19.112681 | orchestrator | Tuesday 31 March 2026 04:29:12 +0000 (0:00:01.588) 0:05:50.368 ********* 2026-03-31 04:29:19.112692 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:29:19.112701 | orchestrator | 2026-03-31 04:29:19.112711 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-31 04:29:19.112721 | orchestrator | Tuesday 31 March 2026 04:29:14 +0000 (0:00:01.998) 0:05:52.366 ********* 2026-03-31 04:29:19.112734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 04:29:19.112753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 04:29:19.112779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-31 04:29:19.112798 | orchestrator | 2026-03-31 04:29:19.112808 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-31 04:29:19.112835 | orchestrator | Tuesday 31 March 2026 04:29:16 +0000 (0:00:02.783) 0:05:55.150 ********* 2026-03-31 04:29:19.112847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 04:29:19.112858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 04:29:19.112868 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:19.112878 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:19.112890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-31 04:29:19.112902 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:19.112913 | orchestrator | 2026-03-31 04:29:19.112924 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-31 04:29:19.112936 | orchestrator | Tuesday 31 March 2026 04:29:17 +0000 (0:00:00.481) 0:05:55.631 ********* 2026-03-31 04:29:19.112948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 04:29:19.112972 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:19.112984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 04:29:19.112996 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:19.113007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-31 04:29:19.113018 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:19.113029 | orchestrator | 2026-03-31 04:29:19.113039 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-31 04:29:19.113051 | orchestrator | Tuesday 31 March 2026 04:29:18 +0000 (0:00:01.216) 0:05:56.847 ********* 2026-03-31 04:29:19.113068 | orchestrator | skipping: [testbed-node-0] 
2026-03-31 04:29:31.431262 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:31.431385 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:31.431402 | orchestrator | 2026-03-31 04:29:31.431416 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-31 04:29:31.431429 | orchestrator | Tuesday 31 March 2026 04:29:19 +0000 (0:00:00.497) 0:05:57.345 ********* 2026-03-31 04:29:31.431440 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:31.431452 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:31.431463 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:31.431474 | orchestrator | 2026-03-31 04:29:31.431486 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-31 04:29:31.431497 | orchestrator | Tuesday 31 March 2026 04:29:21 +0000 (0:00:02.130) 0:05:59.476 ********* 2026-03-31 04:29:31.431509 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:29:31.431520 | orchestrator | 2026-03-31 04:29:31.431532 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-31 04:29:31.431543 | orchestrator | Tuesday 31 March 2026 04:29:23 +0000 (0:00:01.981) 0:06:01.457 ********* 2026-03-31 04:29:31.431557 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431661 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431772 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-31 04:29:31.431799 | orchestrator | 2026-03-31 04:29:31.431813 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-31 04:29:31.431826 | orchestrator | Tuesday 31 March 2026 04:29:30 +0000 (0:00:07.022) 0:06:08.480 ********* 2026-03-31 04:29:31.431848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 04:29:31.431878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 04:29:37.747992 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:37.748104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 04:29:37.748124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 04:29:37.748139 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:37.748151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-31 04:29:37.748202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-31 04:29:37.748215 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:37.748227 | orchestrator | 2026-03-31 04:29:37.748243 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-31 04:29:37.748263 | orchestrator | Tuesday 31 March 2026 04:29:31 +0000 (0:00:01.176) 0:06:09.656 ********* 2026-03-31 04:29:37.748305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748381 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:37.748399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748487 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 04:29:37.748506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-31 04:29:37.748584 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:37.748604 | orchestrator | 2026-03-31 04:29:37.748623 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-31 04:29:37.748642 | orchestrator | Tuesday 31 March 2026 04:29:32 +0000 (0:00:01.069) 0:06:10.725 ********* 2026-03-31 04:29:37.748662 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:29:37.748682 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:29:37.748733 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:29:37.748751 | orchestrator | 2026-03-31 04:29:37.748770 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-31 04:29:37.748790 | orchestrator | Tuesday 31 March 2026 04:29:33 +0000 (0:00:01.355) 0:06:12.080 ********* 2026-03-31 04:29:37.748810 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:29:37.748841 | orchestrator | 
ok: [testbed-node-1] 2026-03-31 04:29:37.748861 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:29:37.748879 | orchestrator | 2026-03-31 04:29:37.748898 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-31 04:29:37.748917 | orchestrator | Tuesday 31 March 2026 04:29:36 +0000 (0:00:02.828) 0:06:14.909 ********* 2026-03-31 04:29:37.748936 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:37.748955 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:37.748974 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:37.748987 | orchestrator | 2026-03-31 04:29:37.748998 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-31 04:29:37.749009 | orchestrator | Tuesday 31 March 2026 04:29:37 +0000 (0:00:00.362) 0:06:15.271 ********* 2026-03-31 04:29:37.749020 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:37.749031 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:37.749042 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:37.749055 | orchestrator | 2026-03-31 04:29:37.749073 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-31 04:29:37.749092 | orchestrator | Tuesday 31 March 2026 04:29:37 +0000 (0:00:00.360) 0:06:15.632 ********* 2026-03-31 04:29:37.749110 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:37.749129 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:37.749164 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:40.202989 | orchestrator | 2026-03-31 04:29:40.203093 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-31 04:29:40.203111 | orchestrator | Tuesday 31 March 2026 04:29:37 +0000 (0:00:00.345) 0:06:15.978 ********* 2026-03-31 04:29:40.203123 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:40.203135 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 04:29:40.203147 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:40.203158 | orchestrator | 2026-03-31 04:29:40.203169 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-31 04:29:40.203206 | orchestrator | Tuesday 31 March 2026 04:29:38 +0000 (0:00:00.762) 0:06:16.741 ********* 2026-03-31 04:29:40.203219 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:40.203230 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:40.203241 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:40.203252 | orchestrator | 2026-03-31 04:29:40.203263 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-31 04:29:40.203274 | orchestrator | Tuesday 31 March 2026 04:29:38 +0000 (0:00:00.371) 0:06:17.113 ********* 2026-03-31 04:29:40.203285 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:40.203296 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:40.203307 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:40.203318 | orchestrator | 2026-03-31 04:29:40.203329 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:29:40.203341 | orchestrator | testbed-node-0 : ok=113  changed=1  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2026-03-31 04:29:40.203354 | orchestrator | testbed-node-1 : ok=112  changed=0 unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2026-03-31 04:29:40.203366 | orchestrator | testbed-node-2 : ok=112  changed=0 unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2026-03-31 04:29:40.203377 | orchestrator | 2026-03-31 04:29:40.203388 | orchestrator | 2026-03-31 04:29:40.203400 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:29:40.203411 | orchestrator | Tuesday 31 March 2026 04:29:39 +0000 (0:00:00.264) 0:06:17.378 ********* 2026-03-31 
04:29:40.203422 | orchestrator | =============================================================================== 2026-03-31 04:29:40.203433 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.02s 2026-03-31 04:29:40.203444 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.98s 2026-03-31 04:29:40.203455 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.41s 2026-03-31 04:29:40.203466 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.39s 2026-03-31 04:29:40.203477 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.30s 2026-03-31 04:29:40.203488 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.11s 2026-03-31 04:29:40.203499 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 4.74s 2026-03-31 04:29:40.203509 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.73s 2026-03-31 04:29:40.203520 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.57s 2026-03-31 04:29:40.203532 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.57s 2026-03-31 04:29:40.203545 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.44s 2026-03-31 04:29:40.203558 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.39s 2026-03-31 04:29:40.203571 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.38s 2026-03-31 04:29:40.203583 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.24s 2026-03-31 04:29:40.203596 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.16s 2026-03-31 
04:29:40.203608 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.15s 2026-03-31 04:29:40.203620 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.14s 2026-03-31 04:29:40.203633 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.09s 2026-03-31 04:29:40.203645 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.05s 2026-03-31 04:29:40.203673 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.03s 2026-03-31 04:29:40.650307 | orchestrator | + osism apply -a upgrade opensearch 2026-03-31 04:29:42.702861 | orchestrator | 2026-03-31 04:29:42 | INFO  | Task 3c7de289-396e-4377-80ac-1eb7cd7f22e1 (opensearch) was prepared for execution. 2026-03-31 04:29:42.702967 | orchestrator | 2026-03-31 04:29:42 | INFO  | It takes a moment until task 3c7de289-396e-4377-80ac-1eb7cd7f22e1 (opensearch) has been started and output is visible here. 
2026-03-31 04:29:53.924644 | orchestrator | 2026-03-31 04:29:53.924738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:29:53.924768 | orchestrator | 2026-03-31 04:29:53.924775 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:29:53.924783 | orchestrator | Tuesday 31 March 2026 04:29:46 +0000 (0:00:00.274) 0:00:00.274 ********* 2026-03-31 04:29:53.924789 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:29:53.924797 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:29:53.924804 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:29:53.924810 | orchestrator | 2026-03-31 04:29:53.924817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:29:53.924823 | orchestrator | Tuesday 31 March 2026 04:29:47 +0000 (0:00:00.352) 0:00:00.627 ********* 2026-03-31 04:29:53.924831 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-31 04:29:53.924838 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-31 04:29:53.924844 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-31 04:29:53.924850 | orchestrator | 2026-03-31 04:29:53.924857 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-31 04:29:53.924863 | orchestrator | 2026-03-31 04:29:53.924869 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 04:29:53.924876 | orchestrator | Tuesday 31 March 2026 04:29:47 +0000 (0:00:00.494) 0:00:01.122 ********* 2026-03-31 04:29:53.924883 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:29:53.924890 | orchestrator | 2026-03-31 04:29:53.924896 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-03-31 04:29:53.924903 | orchestrator | Tuesday 31 March 2026 04:29:48 +0000 (0:00:00.554) 0:00:01.676 ********* 2026-03-31 04:29:53.924909 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 04:29:53.924916 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 04:29:53.924922 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-31 04:29:53.924928 | orchestrator | 2026-03-31 04:29:53.924934 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-31 04:29:53.924941 | orchestrator | Tuesday 31 March 2026 04:29:48 +0000 (0:00:00.662) 0:00:02.339 ********* 2026-03-31 04:29:53.924950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:53.924960 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:53.925011 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:53.925022 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:53.925031 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:53.925039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:53.925051 | orchestrator | 2026-03-31 04:29:53.925058 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 04:29:53.925064 | orchestrator | Tuesday 31 March 2026 04:29:50 +0000 (0:00:01.612) 0:00:03.951 ********* 2026-03-31 04:29:53.925071 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:29:53.925077 | orchestrator | 2026-03-31 04:29:53.925083 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-31 04:29:53.925090 | orchestrator | Tuesday 31 March 2026 04:29:51 +0000 (0:00:00.594) 0:00:04.545 ********* 2026-03-31 04:29:53.925103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:54.676464 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:54.676566 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:29:54.676635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:54.676680 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:54.676741 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:29:54.676795 | orchestrator | 2026-03-31 04:29:54.676810 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-31 04:29:54.676823 | orchestrator | Tuesday 31 March 2026 04:29:53 +0000 (0:00:02.759) 0:00:07.304 ********* 2026-03-31 04:29:54.676836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:54.676859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:54.676873 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:54.676891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:54.676914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:55.606326 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:55.606423 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:55.606465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:55.606479 | orchestrator | skipping: 
[testbed-node-2] 2026-03-31 04:29:55.606490 | orchestrator | 2026-03-31 04:29:55.606501 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-31 04:29:55.606512 | orchestrator | Tuesday 31 March 2026 04:29:54 +0000 (0:00:00.756) 0:00:08.061 ********* 2026-03-31 04:29:55.606536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:55.606548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:55.606575 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:29:55.606587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:55.606604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:55.606615 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:29:55.606631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-31 04:29:55.606642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-31 04:29:55.606653 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:29:55.606663 | orchestrator | 2026-03-31 04:29:55.606673 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-31 04:29:55.606689 | orchestrator | Tuesday 31 March 2026 04:29:55 +0000 (0:00:00.925) 0:00:08.987 ********* 2026-03-31 04:30:05.426358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:05.426484 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:05.426512 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:05.426525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:30:05.426554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:30:05.426574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:30:05.426585 | orchestrator | 2026-03-31 04:30:05.426596 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-31 04:30:05.426606 | orchestrator | Tuesday 31 March 2026 04:29:58 +0000 (0:00:02.691) 0:00:11.679 ********* 2026-03-31 04:30:05.426615 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:05.426625 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:05.426634 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:05.426642 | orchestrator | 2026-03-31 04:30:05.426651 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-31 04:30:05.426660 | orchestrator | Tuesday 31 March 2026 04:30:00 +0000 (0:00:02.512) 0:00:14.191 ********* 2026-03-31 04:30:05.426669 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:05.426678 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:05.426686 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:05.426695 | orchestrator | 2026-03-31 04:30:05.426704 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-31 04:30:05.426713 | orchestrator | Tuesday 
31 March 2026 04:30:02 +0000 (0:00:01.616) 0:00:15.808 ********* 2026-03-31 04:30:05.426726 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:05.426736 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:05.426758 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-31 04:30:12.064473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:30:12.064582 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-31 04:30:12.064600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 
2026-03-31 04:30:12.064633 | orchestrator | 2026-03-31 04:30:12.064648 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 04:30:12.064660 | orchestrator | Tuesday 31 March 2026 04:30:05 +0000 (0:00:03.001) 0:00:18.810 ********* 2026-03-31 04:30:12.064672 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:30:12.064683 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:30:12.064694 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:30:12.064705 | orchestrator | 2026-03-31 04:30:12.064716 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 04:30:12.064728 | orchestrator | Tuesday 31 March 2026 04:30:05 +0000 (0:00:00.515) 0:00:19.326 ********* 2026-03-31 04:30:12.064738 | orchestrator | 2026-03-31 04:30:12.064749 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 04:30:12.064760 | orchestrator | Tuesday 31 March 2026 04:30:05 +0000 (0:00:00.067) 0:00:19.393 ********* 2026-03-31 04:30:12.064771 | orchestrator | 2026-03-31 04:30:12.064782 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-31 04:30:12.064793 | orchestrator | Tuesday 31 March 2026 04:30:06 +0000 (0:00:00.072) 0:00:19.465 ********* 2026-03-31 04:30:12.064862 | orchestrator | 2026-03-31 04:30:12.064877 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-31 04:30:12.064905 | orchestrator | Tuesday 31 March 2026 04:30:06 +0000 (0:00:00.088) 0:00:19.554 ********* 2026-03-31 04:30:12.064917 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:30:12.064929 | orchestrator | 2026-03-31 04:30:12.064940 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-31 04:30:12.064951 | 
orchestrator | Tuesday 31 March 2026 04:30:06 +0000 (0:00:00.673) 0:00:20.228 ********* 2026-03-31 04:30:12.064961 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:12.064973 | orchestrator | 2026-03-31 04:30:12.064984 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-31 04:30:12.064995 | orchestrator | Tuesday 31 March 2026 04:30:09 +0000 (0:00:02.197) 0:00:22.425 ********* 2026-03-31 04:30:12.065005 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:12.065016 | orchestrator | 2026-03-31 04:30:12.065030 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-31 04:30:12.065042 | orchestrator | Tuesday 31 March 2026 04:30:11 +0000 (0:00:02.153) 0:00:24.579 ********* 2026-03-31 04:30:12.065055 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:30:12.065068 | orchestrator | 2026-03-31 04:30:12.065080 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-31 04:30:12.065093 | orchestrator | Tuesday 31 March 2026 04:30:11 +0000 (0:00:00.204) 0:00:24.784 ********* 2026-03-31 04:30:12.065106 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:30:12.065120 | orchestrator | 2026-03-31 04:30:12.065132 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:30:12.065145 | orchestrator | testbed-node-0 : ok=14  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 04:30:12.065160 | orchestrator | testbed-node-1 : ok=12  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 04:30:12.065178 | orchestrator | testbed-node-2 : ok=12  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 04:30:12.065198 | orchestrator | 2026-03-31 04:30:12.065218 | orchestrator | 2026-03-31 04:30:12.065238 | orchestrator | TASKS RECAP ******************************************************************** 
2026-03-31 04:30:12.065272 | orchestrator | Tuesday 31 March 2026 04:30:11 +0000 (0:00:00.381) 0:00:25.165 ********* 2026-03-31 04:30:12.065293 | orchestrator | =============================================================================== 2026-03-31 04:30:12.065324 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.00s 2026-03-31 04:30:12.065345 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.76s 2026-03-31 04:30:12.065359 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.69s 2026-03-31 04:30:12.065372 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.51s 2026-03-31 04:30:12.065383 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.20s 2026-03-31 04:30:12.065394 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.15s 2026-03-31 04:30:12.065405 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.62s 2026-03-31 04:30:12.065415 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.61s 2026-03-31 04:30:12.065426 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s 2026-03-31 04:30:12.065437 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.76s 2026-03-31 04:30:12.065448 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s 2026-03-31 04:30:12.065458 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.66s 2026-03-31 04:30:12.065469 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-03-31 04:30:12.065480 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-03-31 
04:30:12.065491 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-31 04:30:12.065502 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-03-31 04:30:12.065513 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 0.38s 2026-03-31 04:30:12.065524 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-31 04:30:12.065535 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.23s 2026-03-31 04:30:12.065546 | orchestrator | opensearch : Create new log retention policy ---------------------------- 0.20s 2026-03-31 04:30:12.309523 | orchestrator | + osism apply -a upgrade memcached 2026-03-31 04:30:14.130150 | orchestrator | 2026-03-31 04:30:14 | INFO  | Task 28d50cac-ee15-444d-b9c1-33db6bf7ca9f (memcached) was prepared for execution. 2026-03-31 04:30:14.130277 | orchestrator | 2026-03-31 04:30:14 | INFO  | It takes a moment until task 28d50cac-ee15-444d-b9c1-33db6bf7ca9f (memcached) has been started and output is visible here. 
2026-03-31 04:30:24.405759 | orchestrator | 2026-03-31 04:30:24.405952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:30:24.405974 | orchestrator | 2026-03-31 04:30:24.405986 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:30:24.405998 | orchestrator | Tuesday 31 March 2026 04:30:18 +0000 (0:00:00.261) 0:00:00.261 ********* 2026-03-31 04:30:24.406009 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:24.406074 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:24.406087 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:24.406098 | orchestrator | 2026-03-31 04:30:24.406109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:30:24.406121 | orchestrator | Tuesday 31 March 2026 04:30:18 +0000 (0:00:00.331) 0:00:00.592 ********* 2026-03-31 04:30:24.406132 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-31 04:30:24.406144 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-31 04:30:24.406155 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-31 04:30:24.406166 | orchestrator | 2026-03-31 04:30:24.406177 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-31 04:30:24.406214 | orchestrator | 2026-03-31 04:30:24.406226 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-31 04:30:24.406238 | orchestrator | Tuesday 31 March 2026 04:30:19 +0000 (0:00:00.481) 0:00:01.073 ********* 2026-03-31 04:30:24.406250 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:30:24.406262 | orchestrator | 2026-03-31 04:30:24.406273 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-03-31 04:30:24.406284 | orchestrator | Tuesday 31 March 2026 04:30:19 +0000 (0:00:00.591) 0:00:01.664 ********* 2026-03-31 04:30:24.406295 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-31 04:30:24.406309 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-31 04:30:24.406321 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-31 04:30:24.406334 | orchestrator | 2026-03-31 04:30:24.406346 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-31 04:30:24.406359 | orchestrator | Tuesday 31 March 2026 04:30:20 +0000 (0:00:00.779) 0:00:02.444 ********* 2026-03-31 04:30:24.406372 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-31 04:30:24.406384 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-31 04:30:24.406396 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-31 04:30:24.406409 | orchestrator | 2026-03-31 04:30:24.406421 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-31 04:30:24.406434 | orchestrator | Tuesday 31 March 2026 04:30:22 +0000 (0:00:01.615) 0:00:04.060 ********* 2026-03-31 04:30:24.406446 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:24.406458 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:24.406471 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:24.406484 | orchestrator | 2026-03-31 04:30:24.406496 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:30:24.406523 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:24.406543 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:24.406557 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:24.406569 | orchestrator | 
2026-03-31 04:30:24.406582 | orchestrator | 2026-03-31 04:30:24.406594 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:30:24.406606 | orchestrator | Tuesday 31 March 2026 04:30:23 +0000 (0:00:01.747) 0:00:05.807 ********* 2026-03-31 04:30:24.406619 | orchestrator | =============================================================================== 2026-03-31 04:30:24.406632 | orchestrator | memcached : Check memcached container ----------------------------------- 1.75s 2026-03-31 04:30:24.406645 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.62s 2026-03-31 04:30:24.406657 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s 2026-03-31 04:30:24.406668 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.59s 2026-03-31 04:30:24.406680 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-03-31 04:30:24.406691 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-31 04:30:24.749203 | orchestrator | + osism apply -a upgrade redis 2026-03-31 04:30:26.809715 | orchestrator | 2026-03-31 04:30:26 | INFO  | Task fc670d17-0387-48f1-81cc-d063062b2a14 (redis) was prepared for execution. 2026-03-31 04:30:26.809801 | orchestrator | 2026-03-31 04:30:26 | INFO  | It takes a moment until task fc670d17-0387-48f1-81cc-d063062b2a14 (redis) has been started and output is visible here. 
2026-03-31 04:30:36.350846 | orchestrator | 2026-03-31 04:30:36.350978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:30:36.350992 | orchestrator | 2026-03-31 04:30:36.351020 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:30:36.351029 | orchestrator | Tuesday 31 March 2026 04:30:31 +0000 (0:00:00.281) 0:00:00.281 ********* 2026-03-31 04:30:36.351037 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:36.351047 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:36.351055 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:36.351063 | orchestrator | 2026-03-31 04:30:36.351071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:30:36.351080 | orchestrator | Tuesday 31 March 2026 04:30:31 +0000 (0:00:00.352) 0:00:00.633 ********* 2026-03-31 04:30:36.351088 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-31 04:30:36.351097 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-31 04:30:36.351105 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-31 04:30:36.351113 | orchestrator | 2026-03-31 04:30:36.351121 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-31 04:30:36.351129 | orchestrator | 2026-03-31 04:30:36.351137 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-31 04:30:36.351145 | orchestrator | Tuesday 31 March 2026 04:30:32 +0000 (0:00:00.527) 0:00:01.161 ********* 2026-03-31 04:30:36.351153 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:30:36.351163 | orchestrator | 2026-03-31 04:30:36.351171 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-31 
04:30:36.351179 | orchestrator | Tuesday 31 March 2026 04:30:32 +0000 (0:00:00.575) 0:00:01.737 ********* 2026-03-31 04:30:36.351190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351204 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351283 | orchestrator | 2026-03-31 04:30:36.351292 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-31 04:30:36.351300 | orchestrator | Tuesday 31 March 2026 04:30:33 +0000 (0:00:01.181) 0:00:02.918 ********* 2026-03-31 04:30:36.351308 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351330 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:36.351357 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933558 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933727 | orchestrator | 2026-03-31 04:30:41.933742 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-31 04:30:41.933751 | orchestrator | Tuesday 31 March 2026 04:30:36 +0000 (0:00:02.339) 0:00:05.257 ********* 2026-03-31 04:30:41.933760 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933770 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933777 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933827 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933850 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933857 | orchestrator | 2026-03-31 04:30:41.933865 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-31 04:30:41.933872 | orchestrator | Tuesday 31 March 2026 04:30:38 +0000 (0:00:02.509) 0:00:07.767 ********* 2026-03-31 04:30:41.933879 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933930 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:41.933959 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-31 04:30:42.307316 | orchestrator | 2026-03-31 04:30:42.307387 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 04:30:42.307394 | orchestrator | Tuesday 31 March 2026 04:30:41 +0000 (0:00:02.486) 0:00:10.253 ********* 2026-03-31 04:30:42.307399 | orchestrator | 2026-03-31 04:30:42.307403 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 04:30:42.307408 | orchestrator | Tuesday 31 March 2026 04:30:41 +0000 (0:00:00.076) 0:00:10.329 ********* 2026-03-31 04:30:42.307412 | orchestrator | 2026-03-31 04:30:42.307416 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-31 04:30:42.307420 | orchestrator | Tuesday 31 March 2026 04:30:41 +0000 (0:00:00.074) 0:00:10.404 ********* 2026-03-31 04:30:42.307424 | orchestrator | 2026-03-31 04:30:42.307428 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-31 04:30:42.307432 | orchestrator | testbed-node-0 : ok=7  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:42.307438 | orchestrator | testbed-node-1 : ok=7  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:42.307442 | orchestrator | testbed-node-2 : ok=7  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:30:42.307446 | orchestrator | 2026-03-31 04:30:42.307450 | orchestrator | 2026-03-31 04:30:42.307454 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:30:42.307458 | orchestrator | Tuesday 31 March 2026 04:30:41 +0000 (0:00:00.441) 0:00:10.845 ********* 2026-03-31 04:30:42.307462 | orchestrator | =============================================================================== 2026-03-31 04:30:42.307466 | orchestrator | redis : Copying over redis config files --------------------------------- 2.51s 2026-03-31 04:30:42.307470 | orchestrator | redis : Check redis containers ------------------------------------------ 2.49s 2026-03-31 04:30:42.307474 | orchestrator | redis : Copying over default config.json files -------------------------- 2.34s 2026-03-31 04:30:42.307478 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.18s 2026-03-31 04:30:42.307500 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.59s 2026-03-31 04:30:42.307504 | orchestrator | redis : include_tasks --------------------------------------------------- 0.58s 2026-03-31 04:30:42.307508 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-03-31 04:30:42.307512 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-31 04:30:42.656586 | orchestrator | + osism apply -a upgrade mariadb 2026-03-31 
04:30:44.803831 | orchestrator | 2026-03-31 04:30:44 | INFO  | Task cc73712b-de54-4593-976c-e23366ffbf5d (mariadb) was prepared for execution. 2026-03-31 04:30:44.804004 | orchestrator | 2026-03-31 04:30:44 | INFO  | It takes a moment until task cc73712b-de54-4593-976c-e23366ffbf5d (mariadb) has been started and output is visible here. 2026-03-31 04:30:59.718151 | orchestrator | 2026-03-31 04:30:59.718303 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:30:59.718326 | orchestrator | 2026-03-31 04:30:59.718338 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:30:59.718350 | orchestrator | Tuesday 31 March 2026 04:30:49 +0000 (0:00:00.202) 0:00:00.202 ********* 2026-03-31 04:30:59.718362 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:59.718374 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:30:59.718385 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:30:59.718395 | orchestrator | 2026-03-31 04:30:59.718406 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:30:59.718418 | orchestrator | Tuesday 31 March 2026 04:30:49 +0000 (0:00:00.333) 0:00:00.536 ********* 2026-03-31 04:30:59.718429 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-31 04:30:59.718440 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-31 04:30:59.718451 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-31 04:30:59.718462 | orchestrator | 2026-03-31 04:30:59.718473 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-31 04:30:59.718484 | orchestrator | 2026-03-31 04:30:59.718495 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-31 04:30:59.718506 | orchestrator | Tuesday 31 March 2026 04:30:50 +0000 (0:00:00.640) 
0:00:01.177 ********* 2026-03-31 04:30:59.718516 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:30:59.718527 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:30:59.718538 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:30:59.718549 | orchestrator | 2026-03-31 04:30:59.718560 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 04:30:59.718571 | orchestrator | Tuesday 31 March 2026 04:30:50 +0000 (0:00:00.405) 0:00:01.582 ********* 2026-03-31 04:30:59.718584 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:30:59.718596 | orchestrator | 2026-03-31 04:30:59.718609 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-31 04:30:59.718621 | orchestrator | Tuesday 31 March 2026 04:30:51 +0000 (0:00:00.622) 0:00:02.204 ********* 2026-03-31 04:30:59.718640 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:30:59.718723 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:30:59.718741 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:30:59.718763 | orchestrator | 2026-03-31 04:30:59.718776 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-31 04:30:59.718789 | orchestrator | Tuesday 31 March 2026 04:30:54 +0000 (0:00:03.016) 0:00:05.221 ********* 2026-03-31 04:30:59.718802 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:30:59.718815 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:30:59.718827 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:59.718840 | orchestrator | 2026-03-31 04:30:59.718852 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-31 04:30:59.718865 | orchestrator | Tuesday 31 March 2026 04:30:54 +0000 (0:00:00.648) 0:00:05.870 ********* 2026-03-31 04:30:59.718877 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:30:59.718890 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:30:59.718902 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:30:59.718915 | orchestrator | 2026-03-31 04:30:59.718928 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-31 04:30:59.718940 | orchestrator | Tuesday 31 March 2026 04:30:56 +0000 (0:00:01.293) 0:00:07.163 ********* 2026-03-31 04:30:59.719013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:08.175399 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:08.175556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:08.175576 | orchestrator | 2026-03-31 04:31:08.175590 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-31 04:31:08.175603 | orchestrator | Tuesday 31 March 2026 04:30:59 +0000 (0:00:03.528) 0:00:10.691 ********* 2026-03-31 04:31:08.175614 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:31:08.175626 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:31:08.175637 | 
orchestrator | ok: [testbed-node-0] 2026-03-31 04:31:08.175649 | orchestrator | 2026-03-31 04:31:08.175660 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-31 04:31:08.175671 | orchestrator | Tuesday 31 March 2026 04:31:00 +0000 (0:00:01.088) 0:00:11.780 ********* 2026-03-31 04:31:08.175699 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:31:08.175720 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:31:08.175731 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:31:08.175742 | orchestrator | 2026-03-31 04:31:08.175753 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-31 04:31:08.175764 | orchestrator | Tuesday 31 March 2026 04:31:04 +0000 (0:00:04.038) 0:00:15.819 ********* 2026-03-31 04:31:08.175775 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:31:08.175786 | orchestrator | 2026-03-31 04:31:08.175797 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-31 04:31:08.175808 | orchestrator | Tuesday 31 March 2026 04:31:05 +0000 (0:00:00.800) 0:00:16.619 ********* 2026-03-31 04:31:08.175821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:08.175833 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:31:08.175858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:13.561110 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:31:13.561223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:13.561244 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:31:13.561257 | orchestrator | 2026-03-31 04:31:13.561270 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-31 04:31:13.561282 | orchestrator | Tuesday 31 March 2026 04:31:08 +0000 (0:00:02.525) 0:00:19.145 ********* 2026-03-31 04:31:13.561312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:13.561347 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:31:13.561378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:13.561392 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:31:13.561409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:13.561430 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:31:13.561441 | orchestrator | 2026-03-31 04:31:13.561452 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-31 04:31:13.561463 | orchestrator | Tuesday 31 March 2026 04:31:10 +0000 (0:00:02.527) 0:00:21.672 ********* 2026-03-31 04:31:13.561484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:16.999842 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:31:17.000076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:17.000149 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:31:17.000177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-31 04:31:17.000198 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:31:17.000217 | orchestrator | 2026-03-31 04:31:17.000236 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-31 04:31:17.000256 | orchestrator | Tuesday 31 March 2026 04:31:13 +0000 (0:00:02.867) 0:00:24.540 ********* 2026-03-31 04:31:17.000304 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:17.000344 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:17.000515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-31 04:31:56.345051 | orchestrator | 2026-03-31 04:31:56.345218 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-31 04:31:56.345262 | orchestrator | Tuesday 31 March 2026 04:31:16 +0000 (0:00:03.439) 0:00:27.980 ********* 2026-03-31 04:31:56.345276 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:31:56.345288 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:31:56.345300 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:31:56.345311 | orchestrator | 2026-03-31 04:31:56.345323 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-31 04:31:56.345334 | orchestrator | Tuesday 31 March 
2026 04:31:17 +0000 (0:00:00.837) 0:00:28.818 *********
2026-03-31 04:31:56.345345 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345357 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345368 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345379 | orchestrator |
2026-03-31 04:31:56.345391 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-31 04:31:56.345402 | orchestrator | Tuesday 31 March 2026 04:31:18 +0000 (0:00:00.336) 0:00:29.155 *********
2026-03-31 04:31:56.345413 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345424 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345435 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345446 | orchestrator |
2026-03-31 04:31:56.345458 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-31 04:31:56.345469 | orchestrator | Tuesday 31 March 2026 04:31:18 +0000 (0:00:00.559) 0:00:29.714 *********
2026-03-31 04:31:56.345480 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345491 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345502 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345513 | orchestrator |
2026-03-31 04:31:56.345525 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-31 04:31:56.345536 | orchestrator | Tuesday 31 March 2026 04:31:19 +0000 (0:00:00.895) 0:00:30.609 *********
2026-03-31 04:31:56.345547 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345558 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345570 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345581 | orchestrator |
2026-03-31 04:31:56.345594 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-31 04:31:56.345607 | orchestrator | Tuesday 31 March 2026 04:31:20 +0000 (0:00:00.493) 0:00:31.102 *********
2026-03-31 04:31:56.345619 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:31:56.345633 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.345645 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.345657 | orchestrator |
2026-03-31 04:31:56.345670 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-31 04:31:56.345683 | orchestrator | Tuesday 31 March 2026 04:31:20 +0000 (0:00:00.442) 0:00:31.545 *********
2026-03-31 04:31:56.345695 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345708 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345720 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345732 | orchestrator |
2026-03-31 04:31:56.345752 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-31 04:31:56.345767 | orchestrator | Tuesday 31 March 2026 04:31:23 +0000 (0:00:03.183) 0:00:34.728 *********
2026-03-31 04:31:56.345781 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345793 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345806 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345817 | orchestrator |
2026-03-31 04:31:56.345828 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-31 04:31:56.345840 | orchestrator | Tuesday 31 March 2026 04:31:24 +0000 (0:00:00.480) 0:00:35.209 *********
2026-03-31 04:31:56.345851 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.345862 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.345873 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.345885 | orchestrator |
2026-03-31 04:31:56.345896 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-31 04:31:56.345908 | orchestrator | Tuesday 31 March 2026 04:31:24 +0000 (0:00:00.480) 0:00:35.689 *********
2026-03-31 04:31:56.345927 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:31:56.345939 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.345950 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.345961 | orchestrator |
2026-03-31 04:31:56.345991 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-31 04:31:56.346073 | orchestrator | Tuesday 31 March 2026 04:31:25 +0000 (0:00:00.691) 0:00:36.381 *********
2026-03-31 04:31:56.346087 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:31:56.346098 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346109 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346136 | orchestrator |
2026-03-31 04:31:56.346148 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-31 04:31:56.346159 | orchestrator | Tuesday 31 March 2026 04:31:25 +0000 (0:00:00.311) 0:00:36.692 *********
2026-03-31 04:31:56.346171 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-31 04:31:56.346182 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-31 04:31:56.346193 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-31 04:31:56.346205 | orchestrator | mariadb_bootstrap_restart
2026-03-31 04:31:56.346216 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:31:56.346227 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346238 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346249 | orchestrator |
2026-03-31 04:31:56.346261 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-31 04:31:56.346272 | orchestrator | skipping: no hosts matched
2026-03-31 04:31:56.346283 | orchestrator |
2026-03-31 04:31:56.346294 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-31 04:31:56.346305 | orchestrator | skipping: no hosts matched
2026-03-31 04:31:56.346316 | orchestrator |
2026-03-31 04:31:56.346327 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-31 04:31:56.346338 | orchestrator | skipping: no hosts matched
2026-03-31 04:31:56.346349 | orchestrator |
2026-03-31 04:31:56.346360 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-31 04:31:56.346371 | orchestrator |
2026-03-31 04:31:56.346397 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-31 04:31:56.346409 | orchestrator | Tuesday 31 March 2026 04:31:26 +0000 (0:00:00.690) 0:00:37.383 *********
2026-03-31 04:31:56.346490 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:31:56.346505 | orchestrator |
2026-03-31 04:31:56.346516 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-31 04:31:56.346527 | orchestrator | Tuesday 31 March 2026 04:31:27 +0000 (0:00:00.792) 0:00:38.176 *********
2026-03-31 04:31:56.346538 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346549 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346561 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.346572 | orchestrator |
2026-03-31 04:31:56.346583 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-31 04:31:56.346594 | orchestrator | Tuesday 31 March 2026 04:31:29 +0000 (0:00:02.374) 0:00:40.550 *********
2026-03-31 04:31:56.346605 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346616 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346627 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:31:56.346638 | orchestrator |
2026-03-31 04:31:56.346649 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-31 04:31:56.346660 | orchestrator | Tuesday 31 March 2026 04:31:31 +0000 (0:00:02.332) 0:00:42.883 *********
2026-03-31 04:31:56.346671 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346681 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346692 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.346703 | orchestrator |
2026-03-31 04:31:56.346714 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-31 04:31:56.346734 | orchestrator | Tuesday 31 March 2026 04:31:34 +0000 (0:00:02.383) 0:00:45.266 *********
2026-03-31 04:31:56.346745 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:31:56.346756 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:31:56.346767 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:31:56.346778 | orchestrator |
2026-03-31 04:31:56.346789 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-31 04:31:56.346800 | orchestrator | Tuesday 31 March 2026 04:31:36 +0000 (0:00:02.241) 0:00:47.508 *********
2026-03-31 04:31:56.346811 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:31:56.346822 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:31:56.346833 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:31:56.346844 | orchestrator |
2026-03-31 04:31:56.346856 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-31 04:31:56.346875 | orchestrator | Tuesday 31 March 2026 04:31:39 +0000 (0:00:03.407) 0:00:50.915 *********
2026-03-31 04:31:56.346887 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:31:56.346898 | orchestrator |
2026-03-31 04:31:56.346909 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-03-31 04:31:56.346920 | orchestrator | Tuesday 31 March 2026 04:31:40 +0000 (0:00:00.603) 0:00:51.519 *********
2026-03-31 04:31:56.346931 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:31:56.346942 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:31:56.346953 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:31:56.346964 | orchestrator |
2026-03-31 04:31:56.346975 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:31:56.346987 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-31 04:31:56.347000 | orchestrator | testbed-node-1 : ok=20  changed=1  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-31 04:31:56.347011 | orchestrator | testbed-node-2 : ok=20  changed=1  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-31 04:31:56.347021 | orchestrator |
2026-03-31 04:31:56.347032 | orchestrator |
2026-03-31 04:31:56.347043 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:31:56.347054 | orchestrator | Tuesday 31 March 2026 04:31:56 +0000 (0:00:15.771) 0:01:07.290 *********
2026-03-31 04:31:56.347065 | orchestrator | ===============================================================================
2026-03-31 04:31:56.347076 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 15.77s
2026-03-31 04:31:56.347087 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.04s
2026-03-31 04:31:56.347098 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.53s
2026-03-31 04:31:56.347109 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.44s
2026-03-31 04:31:56.347167 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.41s
2026-03-31 04:31:56.347179 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.18s
2026-03-31 04:31:56.347190 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.02s
2026-03-31 04:31:56.347201 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.87s
2026-03-31 04:31:56.347212 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.53s
2026-03-31 04:31:56.347223 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.53s
2026-03-31 04:31:56.347234 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.38s
2026-03-31 04:31:56.347244 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s
2026-03-31 04:31:56.347255 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.33s
2026-03-31 04:31:56.347266 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.24s
2026-03-31 04:31:56.347292 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.29s
2026-03-31 04:31:56.347303 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.09s
2026-03-31 04:31:56.347322 | orchestrator | mariadb : Check MariaDB service port liveness --------------------------- 0.90s
2026-03-31 04:31:56.668969 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 0.84s
2026-03-31 04:31:56.669068 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.80s
2026-03-31 04:31:56.669083 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.79s
2026-03-31 04:31:56.969455 | orchestrator | + osism apply -a upgrade rabbitmq
2026-03-31 04:31:59.017108 | orchestrator | 2026-03-31 04:31:59 | INFO  | Task 7a317919-c134-404d-8e2d-e43793241c45 (rabbitmq) was prepared for execution.
2026-03-31 04:31:59.017252 | orchestrator | 2026-03-31 04:31:59 | INFO  | It takes a moment until task 7a317919-c134-404d-8e2d-e43793241c45 (rabbitmq) has been started and output is visible here.
2026-03-31 04:32:18.469944 | orchestrator |
2026-03-31 04:32:18.470124 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-31 04:32:18.470145 | orchestrator |
2026-03-31 04:32:18.470157 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-31 04:32:18.470168 | orchestrator | Tuesday 31 March 2026 04:32:03 +0000 (0:00:00.192) 0:00:00.192 *********
2026-03-31 04:32:18.470319 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:18.470337 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:32:18.470349 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:32:18.470360 | orchestrator |
2026-03-31 04:32:18.470372 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-31 04:32:18.470383 | orchestrator | Tuesday 31 March 2026 04:32:03 +0000 (0:00:00.366) 0:00:00.559 *********
2026-03-31 04:32:18.470395 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-31 04:32:18.470407 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-31 04:32:18.470418 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-31 04:32:18.470428 | orchestrator |
2026-03-31 04:32:18.470440 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-31 04:32:18.470451 | orchestrator |
2026-03-31 04:32:18.470465 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-31 04:32:18.470477 | orchestrator | Tuesday 31 March 2026 04:32:04 +0000 (0:00:00.609) 0:00:01.168 *********
2026-03-31 04:32:18.470491 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:32:18.470505 | orchestrator |
2026-03-31 04:32:18.470518 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-31 04:32:18.470530 | orchestrator | Tuesday 31 March 2026 04:32:04 +0000 (0:00:00.568) 0:00:01.737 *********
2026-03-31 04:32:18.470543 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:18.470555 | orchestrator |
2026-03-31 04:32:18.470568 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-31 04:32:18.470581 | orchestrator | Tuesday 31 March 2026 04:32:05 +0000 (0:00:01.100) 0:00:02.838 *********
2026-03-31 04:32:18.470593 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:18.470606 | orchestrator |
2026-03-31 04:32:18.470618 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-31 04:32:18.470630 | orchestrator | Tuesday 31 March 2026 04:32:07 +0000 (0:00:02.149) 0:00:04.987 *********
2026-03-31 04:32:18.470643 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:32:18.470657 | orchestrator |
2026-03-31 04:32:18.470670 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-31 04:32:18.470683 | orchestrator | Tuesday 31 March 2026 04:32:11 +0000 (0:00:03.268) 0:00:08.255 *********
2026-03-31 04:32:18.470696 | orchestrator | ok: [testbed-node-0] => {
2026-03-31 04:32:18.470708 | orchestrator |  "changed": false,
2026-03-31 04:32:18.470746 | orchestrator |  "msg": "All assertions passed"
2026-03-31 04:32:18.470760 | orchestrator | }
2026-03-31 04:32:18.470774 | orchestrator |
2026-03-31 04:32:18.470786 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-31 04:32:18.470799 | orchestrator | Tuesday 31 March 2026 04:32:11 +0000 (0:00:00.426) 0:00:08.682 *********
2026-03-31 04:32:18.470811 | orchestrator | ok: [testbed-node-0] => {
2026-03-31 04:32:18.470825 | orchestrator |  "changed": false,
2026-03-31 04:32:18.470837 | orchestrator |  "msg": "All assertions passed"
2026-03-31 04:32:18.470848 | orchestrator | }
2026-03-31 04:32:18.470859 | orchestrator |
2026-03-31 04:32:18.470870 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-31 04:32:18.470881 | orchestrator | Tuesday 31 March 2026 04:32:12 +0000 (0:00:00.402) 0:00:09.085 *********
2026-03-31 04:32:18.470892 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-31 04:32:18.470904 | orchestrator |
2026-03-31 04:32:18.470915 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-31 04:32:18.470926 | orchestrator | Tuesday 31 March 2026 04:32:12 +0000 (0:00:00.724) 0:00:09.810 *********
2026-03-31 04:32:18.470938 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:18.470949 | orchestrator |
2026-03-31 04:32:18.470960 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-31 04:32:18.470971 | orchestrator | Tuesday 31 March 2026 04:32:13 +0000 (0:00:00.894) 0:00:10.705 *********
2026-03-31 04:32:18.470982 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:18.470993 | orchestrator |
2026-03-31 04:32:18.471004 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-31 04:32:18.471014 | orchestrator | Tuesday 31 March 2026 04:32:15 +0000 (0:00:01.795) 0:00:12.500 *********
2026-03-31 04:32:18.471025 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:32:18.471036 | orchestrator |
2026-03-31 04:32:18.471047 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-31 04:32:18.471058 | orchestrator | Tuesday 31 March 2026 04:32:16 +0000 (0:00:00.767) 0:00:13.268 *********
2026-03-31 04:32:18.471113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:18.471132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:18.471155 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:18.471167 | orchestrator |
2026-03-31 04:32:18.471203 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-31 04:32:18.471214 | orchestrator | Tuesday 31 March 2026 04:32:17 +0000 (0:00:00.875) 0:00:14.144 *********
2026-03-31 04:32:18.471232 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:18.471254 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:32.643954 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:32.644112 | orchestrator |
2026-03-31 04:32:32.644131 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-31 04:32:32.644146 | orchestrator | Tuesday 31 March 2026 04:32:18 +0000 (0:00:01.349) 0:00:15.493 *********
2026-03-31 04:32:32.644157 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-31 04:32:32.644169 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-31 04:32:32.644180 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-31 04:32:32.644191 | orchestrator |
2026-03-31 04:32:32.644202 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-31 04:32:32.644213 | orchestrator | Tuesday 31 March 2026 04:32:19 +0000 (0:00:01.442) 0:00:16.936 *********
2026-03-31 04:32:32.644282 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-31 04:32:32.644294 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-31 04:32:32.644305 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-31 04:32:32.644316 | orchestrator |
2026-03-31 04:32:32.644327 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-31 04:32:32.644338 | orchestrator | Tuesday 31 March 2026 04:32:22 +0000 (0:00:02.323) 0:00:19.259 *********
2026-03-31 04:32:32.644350 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-31 04:32:32.644360 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-31 04:32:32.644371 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-31 04:32:32.644382 | orchestrator |
2026-03-31 04:32:32.644393 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-31 04:32:32.644404 | orchestrator | Tuesday 31 March 2026 04:32:23 +0000 (0:00:01.298) 0:00:20.558 *********
2026-03-31 04:32:32.644415 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-31 04:32:32.644440 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-31 04:32:32.644452 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-31 04:32:32.644463 | orchestrator |
2026-03-31 04:32:32.644474 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-31 04:32:32.644487 | orchestrator | Tuesday 31 March 2026 04:32:24 +0000 (0:00:01.435) 0:00:21.993 *********
2026-03-31 04:32:32.644499 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-31 04:32:32.644512 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-31 04:32:32.644525 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-31 04:32:32.644537 | orchestrator |
2026-03-31 04:32:32.644549 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-31 04:32:32.644572 | orchestrator | Tuesday 31 March 2026 04:32:26 +0000 (0:00:01.380) 0:00:23.373 *********
2026-03-31 04:32:32.644584 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-31 04:32:32.644597 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-31 04:32:32.644609 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-31 04:32:32.644622 | orchestrator |
2026-03-31 04:32:32.644634 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-31 04:32:32.644647 | orchestrator | Tuesday 31 March 2026 04:32:27 +0000 (0:00:01.639) 0:00:25.013 *********
2026-03-31 04:32:32.644660 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:32:32.644673 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:32:32.644686 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:32:32.644699 | orchestrator |
2026-03-31 04:32:32.644730 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-31 04:32:32.644743 | orchestrator | Tuesday 31 March 2026 04:32:28 +0000 (0:00:00.447) 0:00:25.460 *********
2026-03-31 04:32:32.644754 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:32:32.644766 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:32:32.644777 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:32:32.644788 | orchestrator |
2026-03-31 04:32:32.644799 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-31 04:32:32.644810 | orchestrator | Tuesday 31 March 2026 04:32:30 +0000 (0:00:02.025) 0:00:27.486 *********
2026-03-31 04:32:32.644820 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rabbitmq_restart
2026-03-31 04:32:32.644833 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:32.644847 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:32.644866 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-31 04:32:32.644886 | orchestrator |
2026-03-31 04:32:32.644898 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-31 04:32:32.644909 | orchestrator | skipping: no hosts matched
2026-03-31 04:32:32.644920 | orchestrator |
2026-03-31 04:32:32.644931 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-31 04:32:32.644942 | orchestrator |
2026-03-31 04:32:32.644953 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-31 04:32:32.644964 | orchestrator | Tuesday 31 March 2026 04:32:32 +0000 (0:00:01.945) 0:00:29.431 *********
2026-03-31 04:32:32.644975 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:32:32.644992 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-31 04:32:32.994949 | orchestrator | enable_outward_rabbitmq_True
2026-03-31 04:32:32.995055 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-31 04:32:32.995078 | orchestrator | outward_rabbitmq_restart
2026-03-31 04:32:32.995099 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:32:32.995112 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:32:32.995124 | orchestrator |
2026-03-31 04:32:32.995136 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-31 04:32:32.995148 | orchestrator | skipping: no hosts matched
2026-03-31 04:32:32.995159 | orchestrator |
2026-03-31 04:32:32.995170 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-31 04:32:32.995182 | orchestrator | skipping: no hosts matched
2026-03-31 04:32:32.995194 | orchestrator |
2026-03-31 04:32:32.995205 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-31 04:32:32.995241 | orchestrator | skipping: no hosts matched
2026-03-31 04:32:32.995255 | orchestrator |
2026-03-31 04:32:32.995266 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 04:32:32.995279 | orchestrator | testbed-node-0 : ok=21  changed=1  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-31 04:32:32.995292 | orchestrator | testbed-node-1 : ok=14  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:32:32.995303 | orchestrator | testbed-node-2 : ok=14  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 04:32:32.995314 | orchestrator |
2026-03-31 04:32:32.995325 | orchestrator |
2026-03-31 04:32:32.995336 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 04:32:32.995347 | orchestrator | Tuesday 31 March 2026 04:32:32 +0000 (0:00:00.245) 0:00:29.677 *********
2026-03-31 04:32:32.995358 | orchestrator | ===============================================================================
2026-03-31 04:32:32.995369 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 3.27s
2026-03-31 04:32:32.995380 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.32s
2026-03-31 04:32:32.995391 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.15s
2026-03-31 04:32:32.995431 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.03s
2026-03-31 04:32:32.995443 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.95s
2026-03-31 04:32:32.995453 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.80s
2026-03-31 04:32:32.995464 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.64s
2026-03-31 04:32:32.995475 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.44s
2026-03-31 04:32:32.995486 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.44s
2026-03-31 04:32:32.995497 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.38s
2026-03-31 04:32:32.995508 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.35s
2026-03-31 04:32:32.995519 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s
2026-03-31 04:32:32.995531 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.10s
2026-03-31 04:32:32.995550 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.89s
2026-03-31 04:32:32.995567 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.88s
2026-03-31 04:32:32.995584 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 0.77s
2026-03-31 04:32:32.995620 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.72s
2026-03-31 04:32:32.995640 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-31 04:32:32.995659 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.57s
2026-03-31 04:32:32.995677 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.45s
2026-03-31 04:32:33.307378 | orchestrator | + osism apply -a upgrade openvswitch
2026-03-31 04:32:35.287088 | orchestrator | 2026-03-31 04:32:35 | INFO  | Task ab34d532-e36e-4fd3-a47f-ff71cdc9fcd3 (openvswitch) was prepared for execution.
2026-03-31 04:32:35.287183 | orchestrator | 2026-03-31 04:32:35 | INFO  | It takes a moment until task ab34d532-e36e-4fd3-a47f-ff71cdc9fcd3 (openvswitch) has been started and output is visible here.
2026-03-31 04:32:49.039110 | orchestrator | 2026-03-31 04:32:49.039319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:32:49.039336 | orchestrator | 2026-03-31 04:32:49.039345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:32:49.039352 | orchestrator | Tuesday 31 March 2026 04:32:39 +0000 (0:00:00.336) 0:00:00.336 ********* 2026-03-31 04:32:49.039359 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:32:49.039368 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:32:49.039375 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:32:49.039382 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:32:49.039388 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:32:49.039395 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:32:49.039402 | orchestrator | 2026-03-31 04:32:49.039409 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:32:49.039416 | orchestrator | Tuesday 31 March 2026 04:32:40 +0000 (0:00:00.791) 0:00:01.127 ********* 2026-03-31 04:32:49.039423 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039431 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039438 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039445 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039452 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039459 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-31 04:32:49.039465 | orchestrator | 2026-03-31 04:32:49.039472 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-31 04:32:49.039498 | orchestrator | 2026-03-31 04:32:49.039506 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-31 04:32:49.039512 | orchestrator | Tuesday 31 March 2026 04:32:41 +0000 (0:00:00.741) 0:00:01.869 ********* 2026-03-31 04:32:49.039520 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 04:32:49.039528 | orchestrator | 2026-03-31 04:32:49.039535 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-31 04:32:49.039542 | orchestrator | Tuesday 31 March 2026 04:32:42 +0000 (0:00:01.213) 0:00:03.083 ********* 2026-03-31 04:32:49.039549 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-31 04:32:49.039556 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-31 04:32:49.039563 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-31 04:32:49.039569 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-31 04:32:49.039576 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-31 04:32:49.039583 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-31 04:32:49.039589 | orchestrator | 2026-03-31 04:32:49.039596 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-31 04:32:49.039603 | orchestrator | Tuesday 31 March 2026 04:32:43 +0000 (0:00:01.175) 0:00:04.258 ********* 2026-03-31 04:32:49.039610 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-31 04:32:49.039616 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-31 04:32:49.039623 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-31 04:32:49.039630 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-31 
04:32:49.039637 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-31 04:32:49.039643 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-31 04:32:49.039650 | orchestrator | 2026-03-31 04:32:49.039657 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-31 04:32:49.039665 | orchestrator | Tuesday 31 March 2026 04:32:44 +0000 (0:00:01.457) 0:00:05.716 ********* 2026-03-31 04:32:49.039674 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-31 04:32:49.039682 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:32:49.039690 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-31 04:32:49.039698 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:32:49.039705 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-31 04:32:49.039713 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:32:49.039721 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-31 04:32:49.039728 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:32:49.039736 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-31 04:32:49.039744 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:32:49.039751 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-31 04:32:49.039759 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:32:49.039767 | orchestrator | 2026-03-31 04:32:49.039775 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-31 04:32:49.039783 | orchestrator | Tuesday 31 March 2026 04:32:46 +0000 (0:00:01.762) 0:00:07.479 ********* 2026-03-31 04:32:49.039791 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:32:49.039811 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:32:49.039819 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:32:49.039826 | orchestrator | skipping: 
[testbed-node-3] 2026-03-31 04:32:49.039835 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:32:49.039843 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:32:49.039851 | orchestrator | 2026-03-31 04:32:49.039858 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-31 04:32:49.039866 | orchestrator | Tuesday 31 March 2026 04:32:47 +0000 (0:00:00.659) 0:00:08.139 ********* 2026-03-31 04:32:49.039894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039915 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039924 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039932 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039955 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:49.039975 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:51.254880 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255012 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255037 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255057 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255097 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255146 | orchestrator | 2026-03-31 04:32:51.255161 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-31 04:32:51.255174 | orchestrator | Tuesday 31 March 2026 04:32:49 +0000 (0:00:01.698) 0:00:09.837 ********* 2026-03-31 04:32:51.255205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-03-31 04:32:51.255242 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255314 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:51.255336 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719843 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719897 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719911 | orchestrator 
| ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:32:54.719923 | orchestrator | 2026-03-31 04:32:54.719937 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-31 04:32:54.719950 | orchestrator | Tuesday 31 March 2026 04:32:51 +0000 (0:00:02.324) 0:00:12.162 ********* 2026-03-31 04:32:54.719961 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:32:54.719974 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:32:54.719985 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:32:54.719995 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:32:54.720006 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:32:54.720017 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:32:54.720045 | orchestrator | 2026-03-31 04:32:54.720057 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-31 04:32:54.720068 | orchestrator | Tuesday 31 March 2026 04:32:52 +0000 (0:00:00.961) 0:00:13.123 ********* 2026-03-31 04:32:54.720099 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:54.720113 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:54.720125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:54.720151 | 
orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:54.720163 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:32:54.720183 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166494 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166688 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166745 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-31 04:33:05.166767 | orchestrator | 2026-03-31 04:33:05.166782 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-31 04:33:05.166795 | orchestrator | Tuesday 31 March 2026 04:32:54 +0000 (0:00:02.496) 0:00:15.620 ********* 2026-03-31 04:33:05.166807 | orchestrator | 2026-03-31 04:33:05.166818 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-31 04:33:05.166829 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.314) 0:00:15.935 ********* 2026-03-31 04:33:05.166840 | orchestrator | 2026-03-31 04:33:05.166852 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-31 04:33:05.166863 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.174) 0:00:16.109 ********* 2026-03-31 04:33:05.166874 | orchestrator | 2026-03-31 04:33:05.166886 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-31 04:33:05.166906 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.151) 0:00:16.260 ********* 2026-03-31 04:33:05.166918 | orchestrator | 2026-03-31 04:33:05.166929 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-03-31 04:33:05.166940 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.150) 0:00:16.410 ********* 2026-03-31 04:33:05.166951 | orchestrator | 2026-03-31 04:33:05.166962 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-31 04:33:05.166973 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.145) 0:00:16.555 ********* 2026-03-31 04:33:05.166984 | orchestrator | 2026-03-31 04:33:05.166995 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-31 04:33:05.167006 | orchestrator | Tuesday 31 March 2026 04:32:55 +0000 (0:00:00.172) 0:00:16.728 ********* 2026-03-31 04:33:05.167018 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-31 04:33:05.167032 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-31 04:33:05.167045 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-31 04:33:05.167058 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-31 04:33:05.167070 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-31 04:33:05.167084 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-31 04:33:05.167096 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-31 04:33:05.167115 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-31 04:33:05.167129 | orchestrator | ok: [testbed-node-5] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-31 04:33:05.167142 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-31 04:33:05.167155 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-31 04:33:05.167168 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-31 04:33:05.167181 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167194 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167208 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167225 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167244 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167263 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-31 04:33:05.167280 | orchestrator | 2026-03-31 04:33:05.167324 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-31 04:33:05.167342 | orchestrator | Tuesday 31 March 2026 04:33:02 +0000 (0:00:06.799) 0:00:23.528 ********* 2026-03-31 04:33:05.167365 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-31 04:33:05.167383 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:33:05.167402 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-31 04:33:05.167421 | 
orchestrator | skipping: [testbed-node-4] 2026-03-31 04:33:05.167453 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-31 04:33:05.167472 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:33:05.167490 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-03-31 04:33:05.167510 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-03-31 04:33:05.167530 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-03-31 04:33:05.167549 | orchestrator | 2026-03-31 04:33:05.167560 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-31 04:33:05.167582 | orchestrator | Tuesday 31 March 2026 04:33:05 +0000 (0:00:02.433) 0:00:25.961 ********* 2026-03-31 04:33:08.342444 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-31 04:33:08.342553 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:33:08.342570 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-31 04:33:08.342582 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:33:08.342593 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-31 04:33:08.342604 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:33:08.342615 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-31 04:33:08.342628 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-31 04:33:08.342639 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-31 04:33:08.342650 | orchestrator | 2026-03-31 04:33:08.342662 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:33:08.342674 | orchestrator | testbed-node-0 : ok=11  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 04:33:08.342686 | orchestrator | testbed-node-1 : ok=11  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 04:33:08.342698 | 
orchestrator | testbed-node-2 : ok=11  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-31 04:33:08.342708 | orchestrator | testbed-node-3 : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 04:33:08.342720 | orchestrator | testbed-node-4 : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 04:33:08.342730 | orchestrator | testbed-node-5 : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 04:33:08.342741 | orchestrator | 2026-03-31 04:33:08.342752 | orchestrator | 2026-03-31 04:33:08.342763 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:33:08.342774 | orchestrator | Tuesday 31 March 2026 04:33:07 +0000 (0:00:02.786) 0:00:28.748 ********* 2026-03-31 04:33:08.342785 | orchestrator | =============================================================================== 2026-03-31 04:33:08.342796 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.80s 2026-03-31 04:33:08.342807 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.79s 2026-03-31 04:33:08.342818 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.50s 2026-03-31 04:33:08.342829 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.43s 2026-03-31 04:33:08.342860 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.32s 2026-03-31 04:33:08.342871 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.76s 2026-03-31 04:33:08.342882 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.70s 2026-03-31 04:33:08.342893 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.46s 2026-03-31 04:33:08.342904 | orchestrator | openvswitch : 
include_tasks --------------------------------------------- 1.21s 2026-03-31 04:33:08.342915 | orchestrator | module-load : Load modules ---------------------------------------------- 1.18s 2026-03-31 04:33:08.342948 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.11s 2026-03-31 04:33:08.342960 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.96s 2026-03-31 04:33:08.342971 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s 2026-03-31 04:33:08.342982 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2026-03-31 04:33:08.342993 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s 2026-03-31 04:33:08.633941 | orchestrator | + osism apply -a upgrade ovn 2026-03-31 04:33:10.607109 | orchestrator | 2026-03-31 04:33:10 | INFO  | Task 02cdd78a-90fc-4aeb-a40a-e3b9a08a56e0 (ovn) was prepared for execution. 2026-03-31 04:33:10.607218 | orchestrator | 2026-03-31 04:33:10 | INFO  | It takes a moment until task 02cdd78a-90fc-4aeb-a40a-e3b9a08a56e0 (ovn) has been started and output is visible here. 
2026-03-31 04:33:21.626152 | orchestrator | 2026-03-31 04:33:21.626308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-31 04:33:21.626373 | orchestrator | 2026-03-31 04:33:21.626397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-31 04:33:21.626415 | orchestrator | Tuesday 31 March 2026 04:33:14 +0000 (0:00:00.175) 0:00:00.175 ********* 2026-03-31 04:33:21.626433 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:33:21.626453 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:33:21.626471 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:33:21.626489 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:21.626509 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:21.626528 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:21.626547 | orchestrator | 2026-03-31 04:33:21.626566 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-31 04:33:21.626578 | orchestrator | Tuesday 31 March 2026 04:33:15 +0000 (0:00:00.770) 0:00:00.945 ********* 2026-03-31 04:33:21.626589 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-31 04:33:21.626603 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-31 04:33:21.626616 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-31 04:33:21.626630 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-31 04:33:21.626642 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-31 04:33:21.626655 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-31 04:33:21.626667 | orchestrator | 2026-03-31 04:33:21.626680 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-31 04:33:21.626693 | orchestrator | 2026-03-31 04:33:21.626705 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-03-31 04:33:21.626718 | orchestrator | Tuesday 31 March 2026 04:33:16 +0000 (0:00:00.850) 0:00:01.796 ********* 2026-03-31 04:33:21.626731 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:33:21.626745 | orchestrator | 2026-03-31 04:33:21.626758 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-31 04:33:21.626771 | orchestrator | Tuesday 31 March 2026 04:33:17 +0000 (0:00:01.227) 0:00:03.023 ********* 2026-03-31 04:33:21.626786 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626803 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626844 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626918 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626930 | orchestrator | 2026-03-31 04:33:21.626942 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-31 04:33:21.626953 | orchestrator | Tuesday 31 March 2026 04:33:18 +0000 (0:00:01.272) 0:00:04.296 ********* 2026-03-31 04:33:21.626964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626975 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626986 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.626998 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.627017 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.627028 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.627039 | orchestrator | 2026-03-31 04:33:21.627058 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-31 04:33:21.627077 | orchestrator | Tuesday 31 March 2026 04:33:20 +0000 (0:00:01.536) 0:00:05.833 ********* 2026-03-31 04:33:21.627096 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.627115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:21.627146 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.721826 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.721942 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.721958 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.721971 | orchestrator | 2026-03-31 04:33:47.722009 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-31 04:33:47.722074 | orchestrator 
| Tuesday 31 March 2026 04:33:21 +0000 (0:00:01.330) 0:00:07.163 ********* 2026-03-31 04:33:47.722084 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722094 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722119 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722142 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722173 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722185 | orchestrator | 2026-03-31 04:33:47.722191 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-31 04:33:47.722198 | orchestrator | Tuesday 31 March 2026 04:33:23 +0000 (0:00:01.684) 0:00:08.848 ********* 2026-03-31 04:33:47.722204 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722211 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722226 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722239 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722249 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:33:47.722256 | orchestrator | 2026-03-31 04:33:47.722262 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-31 04:33:47.722268 | orchestrator | Tuesday 31 March 2026 04:33:25 +0000 (0:00:01.756) 0:00:10.605 ********* 2026-03-31 04:33:47.722275 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:33:47.722282 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:33:47.722288 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:33:47.722295 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:47.722301 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:47.722307 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:47.722313 | orchestrator | 2026-03-31 04:33:47.722320 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-31 04:33:47.722326 | orchestrator | Tuesday 31 March 2026 04:33:27 +0000 (0:00:02.758) 0:00:13.363 ********* 2026-03-31 04:33:47.722333 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-31 04:33:47.722340 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-31 04:33:47.722347 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-31 04:33:47.722353 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-31 04:33:47.722359 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-31 04:33:47.722365 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-31 04:33:47.722372 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:47.722378 | orchestrator | ok: [testbed-node-5] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:47.722389 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:51.747771 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:51.747873 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:51.747913 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-31 04:33:51.747927 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747941 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747952 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747964 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747975 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747986 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-31 04:33:51.747998 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-31 04:33:51.748009 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-31 04:33:51.748020 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': 
'60000'}) 2026-03-31 04:33:51.748031 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-31 04:33:51.748042 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-31 04:33:51.748053 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-31 04:33:51.748063 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748074 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748085 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748096 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748107 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748118 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-31 04:33:51.748129 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748140 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748151 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748177 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748188 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748199 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-31 04:33:51.748210 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-31 04:33:51.748222 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-31 04:33:51.748233 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-31 04:33:51.748244 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-31 04:33:51.748255 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-31 04:33:51.748274 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-31 04:33:51.748285 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-31 04:33:51.748296 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-31 04:33:51.748307 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-31 04:33:51.748337 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-31 04:33:51.748351 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-31 04:33:51.748364 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-31 04:33:51.748377 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-31 04:33:51.748389 | 
orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-31 04:33:51.748401 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-31 04:33:51.748439 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-31 04:33:51.748453 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-31 04:33:51.748467 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-31 04:33:51.748479 | orchestrator | 2026-03-31 04:33:51.748493 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748506 | orchestrator | Tuesday 31 March 2026 04:33:46 +0000 (0:00:18.821) 0:00:32.185 ********* 2026-03-31 04:33:51.748518 | orchestrator | 2026-03-31 04:33:51.748530 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748543 | orchestrator | Tuesday 31 March 2026 04:33:46 +0000 (0:00:00.075) 0:00:32.260 ********* 2026-03-31 04:33:51.748555 | orchestrator | 2026-03-31 04:33:51.748567 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748579 | orchestrator | Tuesday 31 March 2026 04:33:46 +0000 (0:00:00.087) 0:00:32.348 ********* 2026-03-31 04:33:51.748591 | orchestrator | 2026-03-31 04:33:51.748604 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748616 | orchestrator | Tuesday 31 March 2026 04:33:46 +0000 (0:00:00.071) 0:00:32.419 ********* 2026-03-31 04:33:51.748628 | 
orchestrator | 2026-03-31 04:33:51.748641 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748653 | orchestrator | Tuesday 31 March 2026 04:33:47 +0000 (0:00:00.224) 0:00:32.644 ********* 2026-03-31 04:33:51.748665 | orchestrator | 2026-03-31 04:33:51.748678 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-31 04:33:51.748690 | orchestrator | Tuesday 31 March 2026 04:33:47 +0000 (0:00:00.078) 0:00:32.722 ********* 2026-03-31 04:33:51.748700 | orchestrator | 2026-03-31 04:33:51.748711 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-31 04:33:51.748722 | orchestrator | 2026-03-31 04:33:51.748732 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-31 04:33:51.748743 | orchestrator | Tuesday 31 March 2026 04:33:47 +0000 (0:00:00.532) 0:00:33.255 ********* 2026-03-31 04:33:51.748754 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:33:51.748776 | orchestrator | 2026-03-31 04:33:51.748787 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-31 04:33:51.748798 | orchestrator | Tuesday 31 March 2026 04:33:48 +0000 (0:00:00.763) 0:00:34.019 ********* 2026-03-31 04:33:51.748814 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-31 04:33:51.748825 | orchestrator | 2026-03-31 04:33:51.748836 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-31 04:33:51.748847 | orchestrator | Tuesday 31 March 2026 04:33:49 +0000 (0:00:00.600) 0:00:34.620 ********* 2026-03-31 04:33:51.748858 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:51.748870 | orchestrator | ok: [testbed-node-1] 
2026-03-31 04:33:51.748881 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:51.748892 | orchestrator | 2026-03-31 04:33:51.748902 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-31 04:33:51.748913 | orchestrator | Tuesday 31 March 2026 04:33:50 +0000 (0:00:01.018) 0:00:35.638 ********* 2026-03-31 04:33:51.748924 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:51.748934 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:51.748945 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:51.748956 | orchestrator | 2026-03-31 04:33:51.748967 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-31 04:33:51.748978 | orchestrator | Tuesday 31 March 2026 04:33:50 +0000 (0:00:00.357) 0:00:35.996 ********* 2026-03-31 04:33:51.748989 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:51.748999 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:51.749010 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:51.749021 | orchestrator | 2026-03-31 04:33:51.749032 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-31 04:33:51.749042 | orchestrator | Tuesday 31 March 2026 04:33:50 +0000 (0:00:00.344) 0:00:36.340 ********* 2026-03-31 04:33:51.749053 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:51.749064 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:51.749074 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:33:51.749085 | orchestrator | 2026-03-31 04:33:51.749096 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-31 04:33:51.749107 | orchestrator | Tuesday 31 March 2026 04:33:51 +0000 (0:00:00.348) 0:00:36.689 ********* 2026-03-31 04:33:51.749117 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:33:51.749128 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:33:51.749139 | orchestrator | ok: [testbed-node-2] 2026-03-31 
04:33:51.749150 | orchestrator | 2026-03-31 04:33:51.749160 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-31 04:33:51.749179 | orchestrator | Tuesday 31 March 2026 04:33:51 +0000 (0:00:00.601) 0:00:37.290 ********* 2026-03-31 04:34:04.479936 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:34:04.480054 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:04.480070 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:04.480082 | orchestrator | 2026-03-31 04:34:04.480096 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-31 04:34:04.480111 | orchestrator | Tuesday 31 March 2026 04:33:52 +0000 (0:00:00.351) 0:00:37.641 ********* 2026-03-31 04:34:04.480132 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480153 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480171 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480190 | orchestrator | 2026-03-31 04:34:04.480209 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-31 04:34:04.480227 | orchestrator | Tuesday 31 March 2026 04:33:52 +0000 (0:00:00.828) 0:00:38.470 ********* 2026-03-31 04:34:04.480246 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480263 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480283 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480304 | orchestrator | 2026-03-31 04:34:04.480324 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-31 04:34:04.480378 | orchestrator | Tuesday 31 March 2026 04:33:53 +0000 (0:00:00.604) 0:00:39.075 ********* 2026-03-31 04:34:04.480399 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480419 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480468 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480488 | orchestrator | 2026-03-31 
04:34:04.480508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-31 04:34:04.480525 | orchestrator | Tuesday 31 March 2026 04:33:54 +0000 (0:00:00.829) 0:00:39.905 ********* 2026-03-31 04:34:04.480538 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480551 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480563 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480576 | orchestrator | 2026-03-31 04:34:04.480589 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-31 04:34:04.480603 | orchestrator | Tuesday 31 March 2026 04:33:54 +0000 (0:00:00.388) 0:00:40.293 ********* 2026-03-31 04:34:04.480615 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:34:04.480628 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:04.480641 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:04.480655 | orchestrator | 2026-03-31 04:34:04.480668 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-31 04:34:04.480681 | orchestrator | Tuesday 31 March 2026 04:33:55 +0000 (0:00:00.364) 0:00:40.658 ********* 2026-03-31 04:34:04.480694 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:34:04.480707 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:04.480719 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:04.480732 | orchestrator | 2026-03-31 04:34:04.480745 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-31 04:34:04.480759 | orchestrator | Tuesday 31 March 2026 04:33:55 +0000 (0:00:00.579) 0:00:41.237 ********* 2026-03-31 04:34:04.480777 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480795 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480814 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480829 | orchestrator | 2026-03-31 04:34:04.480847 | orchestrator | TASK 
[ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-31 04:34:04.480866 | orchestrator | Tuesday 31 March 2026 04:33:56 +0000 (0:00:00.760) 0:00:41.997 ********* 2026-03-31 04:34:04.480883 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.480901 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.480918 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.480936 | orchestrator | 2026-03-31 04:34:04.480953 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-31 04:34:04.480973 | orchestrator | Tuesday 31 March 2026 04:33:56 +0000 (0:00:00.342) 0:00:42.340 ********* 2026-03-31 04:34:04.480993 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.481012 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.481032 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.481044 | orchestrator | 2026-03-31 04:34:04.481055 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-31 04:34:04.481083 | orchestrator | Tuesday 31 March 2026 04:33:57 +0000 (0:00:00.812) 0:00:43.152 ********* 2026-03-31 04:34:04.481095 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:04.481105 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:04.481116 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:04.481127 | orchestrator | 2026-03-31 04:34:04.481138 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-31 04:34:04.481149 | orchestrator | Tuesday 31 March 2026 04:33:58 +0000 (0:00:00.598) 0:00:43.751 ********* 2026-03-31 04:34:04.481160 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:34:04.481171 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:04.481182 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:04.481193 | orchestrator | 2026-03-31 04:34:04.481204 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-31 04:34:04.481214 | orchestrator | Tuesday 31 March 2026 04:33:58 +0000 (0:00:00.375) 0:00:44.126 ********* 2026-03-31 04:34:04.481237 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:34:04.481248 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:04.481259 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:04.481270 | orchestrator | 2026-03-31 04:34:04.481281 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-31 04:34:04.481292 | orchestrator | Tuesday 31 March 2026 04:33:58 +0000 (0:00:00.349) 0:00:44.476 ********* 2026-03-31 04:34:04.481305 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481354 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481367 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481416 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481427 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481472 | orchestrator | 2026-03-31 04:34:04.481484 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-31 04:34:04.481496 | orchestrator | Tuesday 31 March 2026 04:34:00 +0000 (0:00:01.715) 0:00:46.191 ********* 2026-03-31 04:34:04.481508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481519 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:04.481539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.081867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.081987 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082015 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082136 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082172 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082205 | orchestrator | 2026-03-31 04:34:17.082219 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-31 04:34:17.082232 | orchestrator | Tuesday 31 March 2026 04:34:04 +0000 (0:00:03.825) 0:00:50.016 ********* 2026-03-31 04:34:17.082243 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 
04:34:17.082267 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082299 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082323 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082357 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-31 04:34:17.082376 | orchestrator | 2026-03-31 04:34:17.082387 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 04:34:17.082398 | orchestrator | Tuesday 31 March 2026 04:34:08 +0000 (0:00:03.674) 0:00:53.691 ********* 2026-03-31 04:34:17.082410 | orchestrator | 2026-03-31 04:34:17.082428 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 04:34:17.082441 | orchestrator | Tuesday 31 March 2026 04:34:08 +0000 (0:00:00.072) 0:00:53.764 ********* 2026-03-31 04:34:17.082454 | orchestrator | 2026-03-31 04:34:17.082504 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-31 04:34:17.082517 | orchestrator | Tuesday 31 March 2026 04:34:08 +0000 
(0:00:00.091) 0:00:53.855 ********* 2026-03-31 04:34:17.082530 | orchestrator | 2026-03-31 04:34:17.082543 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-31 04:34:17.082555 | orchestrator | Tuesday 31 March 2026 04:34:08 +0000 (0:00:00.079) 0:00:53.934 ********* 2026-03-31 04:34:17.082568 | orchestrator | Pausing for 5 seconds 2026-03-31 04:34:17.082583 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:17.082596 | orchestrator | 2026-03-31 04:34:17.082609 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-31 04:34:17.082622 | orchestrator | Tuesday 31 March 2026 04:34:13 +0000 (0:00:05.191) 0:00:59.126 ********* 2026-03-31 04:34:17.082634 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:17.082647 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:17.082659 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:17.082672 | orchestrator | 2026-03-31 04:34:17.082685 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-31 04:34:17.082697 | orchestrator | Tuesday 31 March 2026 04:34:14 +0000 (0:00:01.070) 0:01:00.196 ********* 2026-03-31 04:34:17.082710 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:17.082724 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:17.082737 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:34:17.082750 | orchestrator | 2026-03-31 04:34:17.082762 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-31 04:34:17.082775 | orchestrator | Tuesday 31 March 2026 04:34:15 +0000 (0:00:00.651) 0:01:00.848 ********* 2026-03-31 04:34:17.082788 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:17.082799 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:17.082810 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:17.082820 | orchestrator | 2026-03-31 04:34:17.082831 | orchestrator 
| TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-31 04:34:17.082842 | orchestrator | Tuesday 31 March 2026 04:34:16 +0000 (0:00:00.889) 0:01:01.737 ********* 2026-03-31 04:34:17.082853 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:34:17.082864 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:34:17.082875 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:34:17.082885 | orchestrator | 2026-03-31 04:34:17.082896 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-31 04:34:17.082914 | orchestrator | Tuesday 31 March 2026 04:34:17 +0000 (0:00:00.868) 0:01:02.606 ********* 2026-03-31 04:34:19.217863 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:19.217956 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:19.217967 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:19.217976 | orchestrator | 2026-03-31 04:34:19.217985 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-31 04:34:19.217995 | orchestrator | Tuesday 31 March 2026 04:34:17 +0000 (0:00:00.814) 0:01:03.420 ********* 2026-03-31 04:34:19.218003 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:34:19.218011 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:34:19.218069 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:34:19.218104 | orchestrator | 2026-03-31 04:34:19.218110 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-31 04:34:19.218116 | orchestrator | testbed-node-0 : ok=35  changed=2  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-31 04:34:19.218122 | orchestrator | testbed-node-1 : ok=32  changed=0 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 04:34:19.218127 | orchestrator | testbed-node-2 : ok=32  changed=0 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-31 04:34:19.218133 | orchestrator | testbed-node-3 
: ok=10  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:34:19.218138 | orchestrator | testbed-node-4 : ok=10  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:34:19.218142 | orchestrator | testbed-node-5 : ok=10  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-31 04:34:19.218147 | orchestrator | 2026-03-31 04:34:19.218151 | orchestrator | 2026-03-31 04:34:19.218156 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-31 04:34:19.218161 | orchestrator | Tuesday 31 March 2026 04:34:18 +0000 (0:00:00.918) 0:01:04.339 ********* 2026-03-31 04:34:19.218165 | orchestrator | =============================================================================== 2026-03-31 04:34:19.218170 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.82s 2026-03-31 04:34:19.218174 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.19s 2026-03-31 04:34:19.218179 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s 2026-03-31 04:34:19.218184 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.67s 2026-03-31 04:34:19.218188 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.76s 2026-03-31 04:34:19.218193 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.76s 2026-03-31 04:34:19.218197 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.72s 2026-03-31 04:34:19.218202 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.68s 2026-03-31 04:34:19.218206 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.54s 2026-03-31 04:34:19.218223 | orchestrator | ovn-controller : Ensuring systemd override 
directory exists ------------- 1.33s 2026-03-31 04:34:19.218228 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.27s 2026-03-31 04:34:19.218232 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.23s 2026-03-31 04:34:19.218237 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.07s 2026-03-31 04:34:19.218241 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 1.07s 2026-03-31 04:34:19.218246 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.02s 2026-03-31 04:34:19.218251 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 0.92s 2026-03-31 04:34:19.218255 | orchestrator | ovn-db : Get OVN_Southbound cluster leader ------------------------------ 0.89s 2026-03-31 04:34:19.218260 | orchestrator | ovn-db : Configure OVN SB connection settings --------------------------- 0.87s 2026-03-31 04:34:19.218264 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2026-03-31 04:34:19.218269 | orchestrator | ovn-db : Get OVN NB database information -------------------------------- 0.83s 2026-03-31 04:34:19.526128 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-31 04:34:19.526222 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-31 04:34:19.526237 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-03-31 04:34:19.533033 | orchestrator | + set -e 2026-03-31 04:34:19.533068 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-31 04:34:19.533105 | orchestrator | ++ export INTERACTIVE=false 2026-03-31 04:34:19.533116 | orchestrator | ++ INTERACTIVE=false 2026-03-31 04:34:19.533126 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-31 04:34:19.533136 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-31 
04:34:19.533146 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-03-31 04:34:21.599067 | orchestrator | 2026-03-31 04:34:21 | INFO  | Task ca6f4488-f2ef-4cee-b64a-20a9f76a1160 (ceph-rolling_update) was prepared for execution. 2026-03-31 04:34:21.599182 | orchestrator | 2026-03-31 04:34:21 | INFO  | It takes a moment until task ca6f4488-f2ef-4cee-b64a-20a9f76a1160 (ceph-rolling_update) has been started and output is visible here. 2026-03-31 04:35:17.275164 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-31 04:35:17.275303 | orchestrator | 2.16.14 2026-03-31 04:35:17.275323 | orchestrator | 2026-03-31 04:35:17.275335 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-03-31 04:35:17.275348 | orchestrator | 2026-03-31 04:35:17.275359 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-03-31 04:35:17.275371 | orchestrator | Tuesday 31 March 2026 04:34:27 +0000 (0:00:00.203) 0:00:00.203 ********* 2026-03-31 04:35:17.275382 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-03-31 04:35:17.275394 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-03-31 04:35:17.275405 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-03-31 04:35:17.275469 | orchestrator | skipping: [localhost] 2026-03-31 04:35:17.275491 | orchestrator | 2026-03-31 04:35:17.275511 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-03-31 04:35:17.275529 | orchestrator | 2026-03-31 04:35:17.275547 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-03-31 04:35:17.275560 | orchestrator | Tuesday 31 March 2026 04:34:27 +0000 (0:00:00.259) 0:00:00.463 ********* 2026-03-31 04:35:17.275571 | orchestrator | ok: 
[testbed-node-0] => { 2026-03-31 04:35:17.275604 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275617 | orchestrator | } 2026-03-31 04:35:17.275628 | orchestrator | ok: [testbed-node-1] => { 2026-03-31 04:35:17.275639 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275650 | orchestrator | } 2026-03-31 04:35:17.275663 | orchestrator | ok: [testbed-node-2] => { 2026-03-31 04:35:17.275676 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275689 | orchestrator | } 2026-03-31 04:35:17.275701 | orchestrator | ok: [testbed-node-3] => { 2026-03-31 04:35:17.275714 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275727 | orchestrator | } 2026-03-31 04:35:17.275739 | orchestrator | ok: [testbed-node-4] => { 2026-03-31 04:35:17.275752 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275764 | orchestrator | } 2026-03-31 04:35:17.275776 | orchestrator | ok: [testbed-node-5] => { 2026-03-31 04:35:17.275789 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275802 | orchestrator | } 2026-03-31 04:35:17.275814 | orchestrator | ok: [testbed-manager] => { 2026-03-31 04:35:17.275827 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-31 04:35:17.275840 | orchestrator | } 2026-03-31 04:35:17.275852 | orchestrator | 2026-03-31 04:35:17.275865 | orchestrator | TASK [Gather facts] ************************************************************ 2026-03-31 04:35:17.275878 | orchestrator | Tuesday 31 March 2026 04:34:29 +0000 (0:00:01.495) 0:00:01.959 ********* 2026-03-31 04:35:17.275891 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:17.275903 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:17.275916 | orchestrator | 
skipping: [testbed-node-2] 2026-03-31 04:35:17.275929 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:17.275968 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:17.275982 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:17.275995 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276008 | orchestrator | 2026-03-31 04:35:17.276020 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-03-31 04:35:17.276034 | orchestrator | Tuesday 31 March 2026 04:34:34 +0000 (0:00:04.892) 0:00:06.851 ********* 2026-03-31 04:35:17.276046 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:35:17.276057 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:35:17.276083 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:35:17.276095 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:35:17.276106 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:35:17.276117 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:35:17.276128 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:35:17.276139 | orchestrator | 2026-03-31 04:35:17.276150 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-03-31 04:35:17.276161 | orchestrator | Tuesday 31 March 2026 04:35:03 +0000 (0:00:29.311) 0:00:36.163 ********* 2026-03-31 04:35:17.276172 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276183 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276194 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276205 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276216 | 
orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276227 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276238 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276249 | orchestrator | 2026-03-31 04:35:17.276261 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:35:17.276272 | orchestrator | Tuesday 31 March 2026 04:35:04 +0000 (0:00:00.910) 0:00:37.074 ********* 2026-03-31 04:35:17.276284 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-31 04:35:17.276298 | orchestrator | 2026-03-31 04:35:17.276309 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:35:17.276320 | orchestrator | Tuesday 31 March 2026 04:35:05 +0000 (0:00:01.468) 0:00:38.542 ********* 2026-03-31 04:35:17.276331 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276342 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276353 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276364 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276375 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276386 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276397 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276408 | orchestrator | 2026-03-31 04:35:17.276436 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:35:17.276448 | orchestrator | Tuesday 31 March 2026 04:35:07 +0000 (0:00:01.288) 0:00:39.831 ********* 2026-03-31 04:35:17.276459 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276470 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276481 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276492 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276503 | orchestrator | 
ok: [testbed-node-4] 2026-03-31 04:35:17.276514 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276525 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276536 | orchestrator | 2026-03-31 04:35:17.276547 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:35:17.276558 | orchestrator | Tuesday 31 March 2026 04:35:07 +0000 (0:00:00.754) 0:00:40.585 ********* 2026-03-31 04:35:17.276569 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276579 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276630 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276642 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276653 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276664 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276675 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276685 | orchestrator | 2026-03-31 04:35:17.276697 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:35:17.276708 | orchestrator | Tuesday 31 March 2026 04:35:09 +0000 (0:00:01.324) 0:00:41.910 ********* 2026-03-31 04:35:17.276719 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276730 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276741 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276752 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276763 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276774 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276785 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276796 | orchestrator | 2026-03-31 04:35:17.276807 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:35:17.276818 | orchestrator | Tuesday 31 March 2026 04:35:09 +0000 (0:00:00.626) 0:00:42.536 ********* 2026-03-31 04:35:17.276829 | orchestrator | ok: [testbed-node-0] 2026-03-31 
04:35:17.276840 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276851 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276861 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276872 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276883 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.276894 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.276905 | orchestrator | 2026-03-31 04:35:17.276916 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:35:17.276927 | orchestrator | Tuesday 31 March 2026 04:35:10 +0000 (0:00:00.804) 0:00:43.340 ********* 2026-03-31 04:35:17.276938 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.276949 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.276961 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.276972 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.276982 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.276993 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.277004 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.277015 | orchestrator | 2026-03-31 04:35:17.277026 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:35:17.277038 | orchestrator | Tuesday 31 March 2026 04:35:11 +0000 (0:00:00.646) 0:00:43.986 ********* 2026-03-31 04:35:17.277049 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:17.277060 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:17.277071 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:17.277082 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:17.277093 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:17.277104 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:17.277115 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:17.277126 | orchestrator | 2026-03-31 04:35:17.277137 | orchestrator | 
TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:35:17.277148 | orchestrator | Tuesday 31 March 2026 04:35:12 +0000 (0:00:00.805) 0:00:44.792 ********* 2026-03-31 04:35:17.277159 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.277175 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.277187 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.277198 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.277209 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.277220 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.277231 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:17.277242 | orchestrator | 2026-03-31 04:35:17.277253 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:35:17.277264 | orchestrator | Tuesday 31 March 2026 04:35:12 +0000 (0:00:00.663) 0:00:45.456 ********* 2026-03-31 04:35:17.277275 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:35:17.277295 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:35:17.277306 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:35:17.277317 | orchestrator | 2026-03-31 04:35:17.277328 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:35:17.277339 | orchestrator | Tuesday 31 March 2026 04:35:13 +0000 (0:00:00.741) 0:00:46.197 ********* 2026-03-31 04:35:17.277350 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:17.277361 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:17.277373 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:17.277383 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:17.277394 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:17.277405 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:17.277416 | orchestrator | ok: 
[testbed-manager] 2026-03-31 04:35:17.277427 | orchestrator | 2026-03-31 04:35:17.277444 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:35:17.277461 | orchestrator | Tuesday 31 March 2026 04:35:14 +0000 (0:00:01.138) 0:00:47.335 ********* 2026-03-31 04:35:17.277480 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:35:17.277500 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:35:17.277519 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:35:17.277538 | orchestrator | 2026-03-31 04:35:17.277557 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:35:17.277576 | orchestrator | Tuesday 31 March 2026 04:35:16 +0000 (0:00:02.191) 0:00:49.526 ********* 2026-03-31 04:35:17.277682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:35:26.142541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:35:26.142699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:35:26.142715 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.142728 | orchestrator | 2026-03-31 04:35:26.142741 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:35:26.142754 | orchestrator | Tuesday 31 March 2026 04:35:17 +0000 (0:00:00.418) 0:00:49.945 ********* 2026-03-31 04:35:26.142767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142781 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142805 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.142816 | orchestrator | 2026-03-31 04:35:26.142827 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:35:26.142839 | orchestrator | Tuesday 31 March 2026 04:35:18 +0000 (0:00:00.860) 0:00:50.805 ********* 2026-03-31 04:35:26.142869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142884 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142933 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:26.142947 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.142958 | orchestrator | 2026-03-31 04:35:26.142970 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:35:26.142981 | orchestrator | Tuesday 31 March 2026 04:35:18 +0000 (0:00:00.182) 0:00:50.988 ********* 2026-03-31 04:35:26.142995 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '80cb11f76dbe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:35:15.284972', 'end': '2026-03-31 04:35:15.337428', 'delta': '0:00:00.052456', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80cb11f76dbe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:35:26.143027 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1ea1d727f3e0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:35:15.865872', 'end': '2026-03-31 04:35:15.919662', 'delta': '0:00:00.053790', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ea1d727f3e0'], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:35:26.143040 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'df3f30930c20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:35:16.650602', 'end': '2026-03-31 04:35:16.699234', 'delta': '0:00:00.048632', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df3f30930c20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:35:26.143052 | orchestrator | 2026-03-31 04:35:26.143064 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:35:26.143077 | orchestrator | Tuesday 31 March 2026 04:35:18 +0000 (0:00:00.197) 0:00:51.185 ********* 2026-03-31 04:35:26.143090 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:26.143104 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:26.143116 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:26.143129 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:26.143141 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:26.143154 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:26.143166 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:26.143179 | orchestrator | 2026-03-31 04:35:26.143202 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:35:26.143215 | orchestrator | Tuesday 31 March 2026 04:35:19 +0000 (0:00:01.170) 0:00:52.356 ********* 2026-03-31 04:35:26.143228 | orchestrator | skipping: [testbed-node-0] 2026-03-31 
04:35:26.143240 | orchestrator | 2026-03-31 04:35:26.143253 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:35:26.143265 | orchestrator | Tuesday 31 March 2026 04:35:19 +0000 (0:00:00.243) 0:00:52.600 ********* 2026-03-31 04:35:26.143278 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:26.143291 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:26.143304 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:26.143316 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:26.143329 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:26.143341 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:26.143353 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:26.143366 | orchestrator | 2026-03-31 04:35:26.143378 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:35:26.143391 | orchestrator | Tuesday 31 March 2026 04:35:20 +0000 (0:00:00.961) 0:00:53.561 ********* 2026-03-31 04:35:26.143404 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:26.143417 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143429 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143452 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143463 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143474 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-31 04:35:26.143485 | orchestrator | 2026-03-31 04:35:26.143502 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:35:26.143513 | orchestrator | Tuesday 31 March 2026 04:35:23 +0000 (0:00:02.274) 0:00:55.836 ********* 2026-03-31 04:35:26.143524 | orchestrator 
| ok: [testbed-node-0] 2026-03-31 04:35:26.143536 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:26.143547 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:26.143558 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:26.143568 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:26.143579 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:26.143591 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:26.143659 | orchestrator | 2026-03-31 04:35:26.143683 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:35:26.143700 | orchestrator | Tuesday 31 March 2026 04:35:24 +0000 (0:00:00.945) 0:00:56.781 ********* 2026-03-31 04:35:26.143718 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.143730 | orchestrator | 2026-03-31 04:35:26.143741 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:35:26.143752 | orchestrator | Tuesday 31 March 2026 04:35:24 +0000 (0:00:00.122) 0:00:56.904 ********* 2026-03-31 04:35:26.143763 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.143774 | orchestrator | 2026-03-31 04:35:26.143785 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:35:26.143797 | orchestrator | Tuesday 31 March 2026 04:35:24 +0000 (0:00:00.224) 0:00:57.128 ********* 2026-03-31 04:35:26.143808 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.143819 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:26.143830 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:26.143841 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:26.143852 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:26.143862 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:26.143873 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:26.143884 | orchestrator | 2026-03-31 04:35:26.143896 | orchestrator | 
TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:35:26.143907 | orchestrator | Tuesday 31 March 2026 04:35:25 +0000 (0:00:00.691) 0:00:57.819 ********* 2026-03-31 04:35:26.143926 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:26.143938 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:26.143949 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:26.143960 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:26.143971 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:26.143982 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:26.144002 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838114 | orchestrator | 2026-03-31 04:35:30.838226 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:35:30.838242 | orchestrator | Tuesday 31 March 2026 04:35:26 +0000 (0:00:00.995) 0:00:58.815 ********* 2026-03-31 04:35:30.838254 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.838266 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:30.838277 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:30.838287 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:30.838298 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:30.838309 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:30.838320 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838331 | orchestrator | 2026-03-31 04:35:30.838342 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:35:30.838353 | orchestrator | Tuesday 31 March 2026 04:35:26 +0000 (0:00:00.690) 0:00:59.505 ********* 2026-03-31 04:35:30.838364 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.838375 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:30.838386 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
04:35:30.838396 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:30.838407 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:30.838418 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:30.838428 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838439 | orchestrator | 2026-03-31 04:35:30.838450 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:35:30.838461 | orchestrator | Tuesday 31 March 2026 04:35:27 +0000 (0:00:00.933) 0:01:00.438 ********* 2026-03-31 04:35:30.838472 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.838483 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:30.838493 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:30.838504 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:30.838515 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:30.838525 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:30.838536 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838547 | orchestrator | 2026-03-31 04:35:30.838558 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:35:30.838568 | orchestrator | Tuesday 31 March 2026 04:35:28 +0000 (0:00:00.936) 0:01:01.375 ********* 2026-03-31 04:35:30.838580 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.838591 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:30.838604 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:30.838642 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:30.838654 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:30.838666 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:30.838679 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838691 | orchestrator | 2026-03-31 04:35:30.838704 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved 
symlinks] *** 2026-03-31 04:35:30.838717 | orchestrator | Tuesday 31 March 2026 04:35:29 +0000 (0:00:00.789) 0:01:02.165 ********* 2026-03-31 04:35:30.838730 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.838743 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:30.838755 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:30.838768 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:30.838780 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:30.838793 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:30.838806 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:30.838841 | orchestrator | 2026-03-31 04:35:30.838855 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:35:30.838867 | orchestrator | Tuesday 31 March 2026 04:35:30 +0000 (0:00:01.021) 0:01:03.186 ********* 2026-03-31 04:35:30.838896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.838913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.838926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.838961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:30.838976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.838987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.838999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.839021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 
'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:35:30.839044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.839064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 
1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:30.987862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 
'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:35:30.987937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987968 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:30.987987 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.987999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.988010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:30.988022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:30.988042 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.163858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.163955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.164029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164054 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:31.164084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}})  2026-03-31 04:35:31.164119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.164137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}})  2026-03-31 04:35:31.164150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.164173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:31.164194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277472 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:31.277490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}})  2026-03-31 04:35:31.277533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}})  2026-03-31 04:35:31.277547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 
'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.277693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.277755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-31 04:35:31.277766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}})  2026-03-31 04:35:31.277787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.397206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}})  2026-03-31 04:35:31.397286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397303 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:31.397323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:31.397330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-31 04:35:31.397377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}})  2026-03-31 04:35:31.397384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}})  2026-03-31 04:35:31.397390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.397411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.568229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}})  2026-03-31 04:35:31.568351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.568423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}})  2026-03-31 04:35:31.568467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': 
'1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:31.568567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568597 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:31.568611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.568694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}})  2026-03-31 04:35:31.568716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}})  2026-03-31 04:35:31.568742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.678136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678190 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678238 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:31.678248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 
'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:35:31.678263 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.678305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5'], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '972f9726', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:35:31.826818 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.826931 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:35:31.826946 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:31.826957 | orchestrator | 2026-03-31 04:35:31.826967 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:35:31.826977 | orchestrator | Tuesday 31 March 2026 04:35:31 +0000 (0:00:01.165) 0:01:04.351 ********* 2026-03-31 04:35:31.826989 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827020 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827030 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827078 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827092 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827112 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.827130 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.998968 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999077 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:31.999097 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999131 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999144 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999157 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999171 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999203 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999223 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999259 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 
'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999283 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:31.999315 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508526 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:32.508607 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508671 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508690 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508700 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508707 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508735 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508752 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508762 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508770 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.508789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 
'item'})  2026-03-31 04:35:32.531870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.531949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.531964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.532027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:32.532072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:35:32.532103 | orchestrator | skipping: [testbed-node-3] => (item=sr0) (skip reason: conditional 'osd_auto_discovery | default(False) | bool' was False)
2026-03-31 04:35:32.532116 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2026-03-31 04:35:32.532128 | orchestrator | skipping: [testbed-node-3] => (item=dm-2)
2026-03-31 04:35:32.532140 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2026-03-31 04:35:32.532152 | orchestrator | skipping: [testbed-node-3] => (item=dm-0)
2026-03-31 04:35:32.532185 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2026-03-31 04:35:32.897015 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-03-31 04:35:32.897118 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2026-03-31 04:35:32.897178 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-03-31 04:35:32.897213 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2026-03-31 04:35:32.897227 | orchestrator | skipping: [testbed-node-3] => (item=dm-3)
2026-03-31 04:35:32.897240 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:35:32.897254 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-03-31 04:35:32.897267 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-03-31 04:35:32.897280 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2026-03-31 04:35:32.897305 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:35:32.897324 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2026-03-31 04:35:32.983327 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-03-31 04:35:32.983425 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-03-31 04:35:32.983441 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2026-03-31 04:35:32.983454 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-03-31 04:35:32.983503 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-03-31 04:35:32.983532 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-03-31 04:35:32.983546 | orchestrator | skipping: [testbed-node-4] => (item=dm-2)
2026-03-31 04:35:32.983558 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-03-31 04:35:32.983570 | orchestrator | skipping: [testbed-node-5] => (item=sdd)
2026-03-31 04:35:32.983595 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2026-03-31 04:35:32.983612 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2026-03-31 04:35:33.034137 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2026-03-31 04:35:33.034237 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-03-31 04:35:33.034253 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-03-31 04:35:33.034325 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2026-03-31 04:35:33.034343 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-03-31 04:35:33.034355 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-03-31 04:35:33.034367 | orchestrator | skipping: [testbed-node-5] => (item=sr0)
2026-03-31 04:35:33.034387 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-03-31 04:35:33.034404 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-03-31 04:35:33.034425 | orchestrator | skipping: [testbed-node-5] => (item=dm-2)
2026-03-31 04:35:33.094100 | orchestrator | skipping: [testbed-node-4] => (item=dm-3)
2026-03-31 04:35:33.094195 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-03-31 04:35:33.094241 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2026-03-31 04:35:33.094256 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:35:33.094283 | orchestrator | skipping: [testbed-node-5] => (item=sdc)
2026-03-31 04:35:33.094299 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-03-31 04:35:33.094335 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2026-03-31 04:35:33.094356 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-03-31 04:35:33.094374 | orchestrator | skipping: [testbed-manager] => (item=loop1) (skip reason: conditional 'inventory_hostname in groups.get(osd_group_name, [])' was False)
2026-03-31 04:35:33.094387 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
 2026-03-31 04:35:33.094406 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328304 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328322 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:34.328337 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-39-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328368 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328381 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328394 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328439 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '972f9726', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 
'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_972f9726-ae68-4000-ae51-611d4e82d0e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328554 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328581 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:35:34.328601 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:34.328659 | orchestrator | 2026-03-31 04:35:34.328683 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:35:34.328705 | orchestrator | Tuesday 31 March 2026 04:35:33 +0000 (0:00:01.522) 0:01:05.873 ********* 2026-03-31 04:35:34.328724 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:34.328744 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:34.328764 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:34.328782 | orchestrator | ok: [testbed-node-3] 2026-03-31 
04:35:34.328802 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:34.328821 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:34.328842 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:34.328856 | orchestrator | 2026-03-31 04:35:34.328868 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:35:34.328903 | orchestrator | Tuesday 31 March 2026 04:35:34 +0000 (0:00:01.125) 0:01:06.999 ********* 2026-03-31 04:35:47.310252 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:47.310383 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:47.310406 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:47.310420 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:47.310433 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:47.310447 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:47.310459 | orchestrator | ok: [testbed-manager] 2026-03-31 04:35:47.310473 | orchestrator | 2026-03-31 04:35:47.310490 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:35:47.310506 | orchestrator | Tuesday 31 March 2026 04:35:35 +0000 (0:00:00.955) 0:01:07.954 ********* 2026-03-31 04:35:47.310521 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:35:47.310534 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:35:47.310548 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:35:47.310562 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:47.310575 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:47.310589 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:47.310604 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:47.310620 | orchestrator | 2026-03-31 04:35:47.310635 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:35:47.310676 | orchestrator | Tuesday 31 March 2026 04:35:36 +0000 (0:00:00.973) 0:01:08.928 ********* 2026-03-31 04:35:47.310693 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:47.310709 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:47.310725 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:47.310741 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.310757 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.310773 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.310787 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:47.310798 | orchestrator | 2026-03-31 04:35:47.310808 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:35:47.310819 | orchestrator | Tuesday 31 March 2026 04:35:37 +0000 (0:00:00.970) 0:01:09.899 ********* 2026-03-31 04:35:47.310829 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:47.310840 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:47.310850 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:47.310860 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.310871 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.310881 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.310891 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-03-31 04:35:47.310901 | orchestrator | 2026-03-31 04:35:47.310911 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:35:47.310921 | orchestrator | Tuesday 31 March 2026 04:35:39 +0000 (0:00:01.844) 0:01:11.744 ********* 2026-03-31 04:35:47.310932 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:47.310941 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:47.310951 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:47.310961 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.310972 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.310982 | orchestrator | skipping: [testbed-node-5] 
2026-03-31 04:35:47.310992 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:47.311002 | orchestrator | 2026-03-31 04:35:47.311012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:35:47.311022 | orchestrator | Tuesday 31 March 2026 04:35:39 +0000 (0:00:00.782) 0:01:12.526 ********* 2026-03-31 04:35:47.311031 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:35:47.311040 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-31 04:35:47.311049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:35:47.311074 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-31 04:35:47.311106 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:35:47.311115 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-31 04:35:47.311123 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-31 04:35:47.311132 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-31 04:35:47.311141 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-31 04:35:47.311149 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:35:47.311158 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-31 04:35:47.311166 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-31 04:35:47.311175 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-31 04:35:47.311184 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-31 04:35:47.311192 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:35:47.311201 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-31 04:35:47.311210 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-31 04:35:47.311218 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-31 
04:35:47.311227 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-31 04:35:47.311237 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-31 04:35:47.311252 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-31 04:35:47.311267 | orchestrator | 2026-03-31 04:35:47.311282 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:35:47.311296 | orchestrator | Tuesday 31 March 2026 04:35:42 +0000 (0:00:02.212) 0:01:14.739 ********* 2026-03-31 04:35:47.311310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:35:47.311325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:35:47.311339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:35:47.311354 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:47.311369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 04:35:47.311382 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 04:35:47.311396 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-31 04:35:47.311412 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:47.311426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:35:47.311440 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:35:47.311478 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:35:47.311494 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:47.311510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-31 04:35:47.311524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-31 04:35:47.311535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-31 04:35:47.311544 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 04:35:47.311552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 04:35:47.311561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 04:35:47.311569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 04:35:47.311578 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.311586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 04:35:47.311595 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 04:35:47.311603 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 04:35:47.311612 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.311621 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-31 04:35:47.311629 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-31 04:35:47.311638 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-31 04:35:47.311681 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:47.311700 | orchestrator | 2026-03-31 04:35:47.311709 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:35:47.311718 | orchestrator | Tuesday 31 March 2026 04:35:42 +0000 (0:00:00.854) 0:01:15.594 ********* 2026-03-31 04:35:47.311727 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:35:47.311735 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:35:47.311744 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:35:47.311753 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:35:47.311763 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 04:35:47.311772 | orchestrator | 2026-03-31 04:35:47.311781 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:35:47.311791 | orchestrator | Tuesday 31 March 2026 04:35:44 +0000 (0:00:01.171) 0:01:16.765 ********* 2026-03-31 04:35:47.311800 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.311809 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.311817 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.311826 | orchestrator | 2026-03-31 04:35:47.311835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:35:47.311844 | orchestrator | Tuesday 31 March 2026 04:35:44 +0000 (0:00:00.327) 0:01:17.093 ********* 2026-03-31 04:35:47.311853 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.311861 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.311870 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.311879 | orchestrator | 2026-03-31 04:35:47.311887 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:35:47.311896 | orchestrator | Tuesday 31 March 2026 04:35:44 +0000 (0:00:00.335) 0:01:17.428 ********* 2026-03-31 04:35:47.311905 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.311913 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:35:47.311929 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:35:47.311938 | orchestrator | 2026-03-31 04:35:47.311947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:35:47.311956 | orchestrator | Tuesday 31 March 2026 04:35:45 +0000 (0:00:00.568) 0:01:17.997 ********* 2026-03-31 04:35:47.311964 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:47.311973 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:47.311982 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:47.311990 | orchestrator | 2026-03-31 04:35:47.311999 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-03-31 04:35:47.312008 | orchestrator | Tuesday 31 March 2026 04:35:45 +0000 (0:00:00.456) 0:01:18.454 ********* 2026-03-31 04:35:47.312016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:35:47.312025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:35:47.312034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:35:47.312042 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.312051 | orchestrator | 2026-03-31 04:35:47.312060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:35:47.312068 | orchestrator | Tuesday 31 March 2026 04:35:46 +0000 (0:00:00.391) 0:01:18.845 ********* 2026-03-31 04:35:47.312077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:35:47.312086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:35:47.312094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:35:47.312103 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.312112 | orchestrator | 2026-03-31 04:35:47.312120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:35:47.312129 | orchestrator | Tuesday 31 March 2026 04:35:46 +0000 (0:00:00.392) 0:01:19.237 ********* 2026-03-31 04:35:47.312138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:35:47.312152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:35:47.312161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:35:47.312170 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:35:47.312179 | orchestrator | 2026-03-31 04:35:47.312187 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 
04:35:47.312196 | orchestrator | Tuesday 31 March 2026 04:35:46 +0000 (0:00:00.397) 0:01:19.635 ********* 2026-03-31 04:35:47.312205 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:35:47.312214 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:35:47.312222 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:35:47.312231 | orchestrator | 2026-03-31 04:35:47.312240 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:35:47.312256 | orchestrator | Tuesday 31 March 2026 04:35:47 +0000 (0:00:00.341) 0:01:19.977 ********* 2026-03-31 04:36:23.285412 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 04:36:23.285566 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 04:36:23.285585 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:36:23.285598 | orchestrator | 2026-03-31 04:36:23.285612 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:36:23.285635 | orchestrator | Tuesday 31 March 2026 04:35:48 +0000 (0:00:01.125) 0:01:21.102 ********* 2026-03-31 04:36:23.285648 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:36:23.285659 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:36:23.285672 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:36:23.285684 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:36:23.285695 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:36:23.285706 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:36:23.285763 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:36:23.285775 | orchestrator | 2026-03-31 
04:36:23.285787 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:36:23.285799 | orchestrator | Tuesday 31 March 2026 04:35:49 +0000 (0:00:00.807) 0:01:21.909 ********* 2026-03-31 04:36:23.285810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:36:23.285821 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:36:23.285832 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:36:23.285843 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:36:23.285854 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:36:23.285865 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:36:23.285876 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:36:23.285887 | orchestrator | 2026-03-31 04:36:23.285898 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-03-31 04:36:23.285909 | orchestrator | Tuesday 31 March 2026 04:35:51 +0000 (0:00:02.146) 0:01:24.055 ********* 2026-03-31 04:36:23.285921 | orchestrator | changed: [testbed-node-3] 2026-03-31 04:36:23.285934 | orchestrator | changed: [testbed-node-4] 2026-03-31 04:36:23.285947 | orchestrator | changed: [testbed-node-5] 2026-03-31 04:36:23.285959 | orchestrator | changed: [testbed-manager] 2026-03-31 04:36:23.285972 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:36:23.285984 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:36:23.285997 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:36:23.286009 | orchestrator | 2026-03-31 04:36:23.286068 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-03-31 04:36:23.286124 | orchestrator | Tuesday 31 March 2026 04:36:09 +0000 (0:00:17.771) 0:01:41.827 ********* 2026-03-31 04:36:23.286137 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:23.286150 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:23.286163 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:23.286176 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:23.286189 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:23.286201 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:23.286214 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:23.286226 | orchestrator | 2026-03-31 04:36:23.286239 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-03-31 04:36:23.286252 | orchestrator | Tuesday 31 March 2026 04:36:09 +0000 (0:00:00.686) 0:01:42.513 ********* 2026-03-31 04:36:23.286264 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:23.286277 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:23.286290 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:23.286302 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:23.286313 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:23.286324 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:23.286335 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:23.286345 | orchestrator | 2026-03-31 04:36:23.286356 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-03-31 04:36:23.286367 | orchestrator | Tuesday 31 March 2026 04:36:10 +0000 (0:00:01.019) 0:01:43.532 ********* 2026-03-31 04:36:23.286378 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:23.286391 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:36:23.286411 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:36:23.286430 | orchestrator | changed: [testbed-node-2] 
2026-03-31 04:36:23.286448 | orchestrator | changed: [testbed-node-3]
2026-03-31 04:36:23.286467 | orchestrator | changed: [testbed-node-4]
2026-03-31 04:36:23.286485 | orchestrator | changed: [testbed-node-5]
2026-03-31 04:36:23.286503 | orchestrator |
2026-03-31 04:36:23.286521 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-31 04:36:23.286539 | orchestrator | Tuesday 31 March 2026 04:36:13 +0000 (0:00:02.168) 0:01:45.700 *********
2026-03-31 04:36:23.286560 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-31 04:36:23.286580 | orchestrator |
2026-03-31 04:36:23.286600 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-31 04:36:23.286619 | orchestrator | Tuesday 31 March 2026 04:36:14 +0000 (0:00:01.459) 0:01:47.160 *********
2026-03-31 04:36:23.286639 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.286658 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.286676 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.286695 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.286741 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.286790 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.286810 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.286823 | orchestrator |
2026-03-31 04:36:23.286834 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-31 04:36:23.286845 | orchestrator | Tuesday 31 March 2026 04:36:15 +0000 (0:00:00.964) 0:01:48.124 *********
2026-03-31 04:36:23.286856 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.286867 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.286878 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.286888 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.286899 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.286910 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.286920 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.286931 | orchestrator |
2026-03-31 04:36:23.286942 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-31 04:36:23.286953 | orchestrator | Tuesday 31 March 2026 04:36:16 +0000 (0:00:00.730) 0:01:48.854 *********
2026-03-31 04:36:23.286976 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.286986 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.286997 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287008 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287019 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287029 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287040 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287051 | orchestrator |
2026-03-31 04:36:23.287062 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-31 04:36:23.287073 | orchestrator | Tuesday 31 March 2026 04:36:17 +0000 (0:00:00.998) 0:01:49.853 *********
2026-03-31 04:36:23.287084 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287094 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287105 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287116 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287126 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287137 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287148 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287159 | orchestrator |
2026-03-31 04:36:23.287170 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-03-31 04:36:23.287180 | orchestrator | Tuesday 31 March 2026 04:36:17 +0000 (0:00:00.716) 0:01:50.569 *********
2026-03-31 04:36:23.287191 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287202 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287213 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287224 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287234 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287245 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287256 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287267 | orchestrator |
2026-03-31 04:36:23.287278 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-31 04:36:23.287289 | orchestrator | Tuesday 31 March 2026 04:36:18 +0000 (0:00:00.958) 0:01:51.527 *********
2026-03-31 04:36:23.287299 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287310 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287321 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287332 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287343 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287354 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287373 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287384 | orchestrator |
2026-03-31 04:36:23.287395 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-03-31 04:36:23.287406 | orchestrator | Tuesday 31 March 2026 04:36:19 +0000 (0:00:00.724) 0:01:52.252 *********
2026-03-31 04:36:23.287417 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287427 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287438 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287449 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287459 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287470 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287481 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287491 | orchestrator |
2026-03-31 04:36:23.287502 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-03-31 04:36:23.287513 | orchestrator | Tuesday 31 March 2026 04:36:20 +0000 (0:00:01.033) 0:01:53.286 *********
2026-03-31 04:36:23.287524 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287535 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287545 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287556 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287567 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287578 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287595 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287606 | orchestrator |
2026-03-31 04:36:23.287617 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-03-31 04:36:23.287628 | orchestrator | Tuesday 31 March 2026 04:36:21 +0000 (0:00:00.736) 0:01:54.022 *********
2026-03-31 04:36:23.287639 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287649 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287660 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287671 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287681 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287692 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287703 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287734 | orchestrator |
2026-03-31 04:36:23.287746 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-03-31 04:36:23.287757 | orchestrator | Tuesday 31 March 2026 04:36:22 +0000 (0:00:00.980) 0:01:55.002 *********
2026-03-31 04:36:23.287768 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:23.287779 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:23.287790 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:23.287801 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:23.287812 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:23.287823 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:23.287834 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:23.287845 | orchestrator |
2026-03-31 04:36:23.287856 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-03-31 04:36:23.287875 | orchestrator | Tuesday 31 March 2026 04:36:23 +0000 (0:00:00.947) 0:01:55.949 *********
2026-03-31 04:36:33.514007 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514190 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514207 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514220 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514231 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.514242 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.514253 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.514265 | orchestrator |
2026-03-31 04:36:33.514277 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-03-31 04:36:33.514290 | orchestrator | Tuesday 31 March 2026 04:36:24 +0000 (0:00:00.750) 0:01:56.700 *********
2026-03-31 04:36:33.514301 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514312 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514323 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514334 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514345 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.514355 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.514366 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.514377 | orchestrator |
2026-03-31 04:36:33.514388 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-03-31 04:36:33.514399 | orchestrator | Tuesday 31 March 2026 04:36:25 +0000 (0:00:01.012) 0:01:57.713 *********
2026-03-31 04:36:33.514410 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514420 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514431 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 04:36:33.514455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 04:36:33.514467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 04:36:33.514477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 04:36:33.514512 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 04:36:33.514539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 04:36:33.514552 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.514564 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.514576 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.514589 | orchestrator |
2026-03-31 04:36:33.514616 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-03-31 04:36:33.514629 | orchestrator | Tuesday 31 March 2026 04:36:25 +0000 (0:00:00.739) 0:01:58.452 *********
2026-03-31 04:36:33.514642 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514654 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514665 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514676 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514686 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.514697 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.514708 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.514719 | orchestrator |
2026-03-31 04:36:33.514815 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-03-31 04:36:33.514833 | orchestrator | Tuesday 31 March 2026 04:36:26 +0000 (0:00:01.053) 0:01:59.506 *********
2026-03-31 04:36:33.514844 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514855 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514866 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514876 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514887 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.514898 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.514909 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.514919 | orchestrator |
2026-03-31 04:36:33.514930 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-03-31 04:36:33.514941 | orchestrator | Tuesday 31 March 2026 04:36:27 +0000 (0:00:00.741) 0:02:00.247 *********
2026-03-31 04:36:33.514952 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.514963 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.514973 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.514984 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.514995 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.515006 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.515016 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.515027 | orchestrator |
2026-03-31 04:36:33.515038 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-03-31 04:36:33.515049 | orchestrator | Tuesday 31 March 2026 04:36:28 +0000 (0:00:00.993) 0:02:01.240 *********
2026-03-31 04:36:33.515060 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.515071 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.515082 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.515092 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.515103 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.515114 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.515125 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.515136 | orchestrator |
2026-03-31 04:36:33.515146 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-03-31 04:36:33.515157 | orchestrator | Tuesday 31 March 2026 04:36:29 +0000 (0:00:00.727) 0:02:01.968 *********
2026-03-31 04:36:33.515169 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.515179 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.515209 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.515221 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.515243 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.515254 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.515264 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.515275 | orchestrator |
2026-03-31 04:36:33.515286 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-03-31 04:36:33.515297 | orchestrator | Tuesday 31 March 2026 04:36:30 +0000 (0:00:00.973) 0:02:02.941 *********
2026-03-31 04:36:33.515308 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.515319 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.515330 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.515341 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.515352 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.515362 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.515373 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.515384 | orchestrator |
2026-03-31 04:36:33.515395 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-03-31 04:36:33.515406 | orchestrator | Tuesday 31 March 2026 04:36:31 +0000 (0:00:00.933) 0:02:03.874 *********
2026-03-31 04:36:33.515417 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:33.515428 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:33.515438 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:33.515449 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:33.515460 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-31 04:36:33.515472 | orchestrator |
2026-03-31 04:36:33.515483 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-03-31 04:36:33.515494 | orchestrator | Tuesday 31 March 2026 04:36:32 +0000 (0:00:00.979) 0:02:04.854 *********
2026-03-31 04:36:33.515505 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:36:33.515517 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:36:33.515528 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:36:33.515538 | orchestrator |
2026-03-31 04:36:33.515549 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-03-31 04:36:33.515560 | orchestrator | Tuesday 31 March 2026 04:36:32 +0000 (0:00:00.563) 0:02:05.417 *********
2026-03-31 04:36:33.515571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 04:36:33.515583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 04:36:33.515594 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.515605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 04:36:33.515616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 04:36:33.515627 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:33.515644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 04:36:33.515655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 04:36:33.515666 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:33.515677 | orchestrator |
2026-03-31 04:36:33.515688 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-03-31 04:36:33.515699 | orchestrator | Tuesday 31 March 2026 04:36:33 +0000 (0:00:00.395) 0:02:05.812 *********
2026-03-31 04:36:33.515712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:33.515753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:33.515765 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:33.515777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:33.515788 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:33.515807 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:36.398804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:36.398915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:36.398933 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:36.398948 | orchestrator |
2026-03-31 04:36:36.398961 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-03-31 04:36:36.398973 | orchestrator | Tuesday 31 March 2026 04:36:33 +0000 (0:00:00.377) 0:02:06.190 *********
2026-03-31 04:36:36.398985 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:36.398996 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:36.399007 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:36.399018 | orchestrator |
2026-03-31 04:36:36.399029 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-03-31 04:36:36.399040 | orchestrator | Tuesday 31 March 2026 04:36:33 +0000 (0:00:00.330) 0:02:06.521 *********
2026-03-31 04:36:36.399051 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:36.399062 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:36.399073 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:36.399084 | orchestrator |
2026-03-31 04:36:36.399095 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-03-31 04:36:36.399106 | orchestrator | Tuesday 31 March 2026 04:36:34 +0000 (0:00:00.369) 0:02:06.890 *********
2026-03-31 04:36:36.399117 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:36.399128 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:36.399139 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:36.399150 | orchestrator |
2026-03-31 04:36:36.399161 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-03-31 04:36:36.399172 | orchestrator | Tuesday 31 March 2026 04:36:34 +0000 (0:00:00.555) 0:02:07.446 *********
2026-03-31 04:36:36.399185 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:36.399196 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:36.399207 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:36.399218 | orchestrator |
2026-03-31 04:36:36.399229 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-03-31 04:36:36.399265 | orchestrator | Tuesday 31 March 2026 04:36:35 +0000 (0:00:00.353) 0:02:07.799 *********
2026-03-31 04:36:36.399277 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 04:36:36.399304 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 04:36:36.399316 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 04:36:36.399327 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 04:36:36.399338 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 04:36:36.399349 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 04:36:36.399360 | orchestrator |
2026-03-31 04:36:36.399372 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-03-31 04:36:36.399383 | orchestrator | Tuesday 31 March 2026 04:36:36 +0000 (0:00:01.058) 0:02:08.857 *********
2026-03-31 04:36:36.399419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53/osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774925905.9111156, 'mtime': 1774925905.9071155, 'ctime': 1774925905.9071155, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53/osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:36.399437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-67174221-9040-517a-ae84-daf8ebd704d7/osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774925924.9254177, 'mtime': 1774925924.9204175, 'ctime': 1774925924.9204175, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-67174221-9040-517a-ae84-daf8ebd704d7/osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:36.399459 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:36.399477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb/osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1774925905.866796, 'mtime': 1774925905.8607957, 'ctime': 1774925905.8607957, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb/osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:36.399499 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-da0b55d5-13d5-528b-aee2-5667f342587c/osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1774925924.947092, 'mtime': 1774925924.943092, 'ctime': 1774925924.943092, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-da0b55d5-13d5-528b-aee2-5667f342587c/osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.008725 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:38.008882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-07ced279-a583-5107-8220-95f80fc10ac7/osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 953, 'dev': 6, 'nlink': 1, 'atime': 1774925910.0601265, 'mtime': 1774925910.0571265, 'ctime': 1774925910.0571265, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-07ced279-a583-5107-8220-95f80fc10ac7/osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.008942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-185c377e-da3e-5428-98db-747be321d2f9/osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 963, 'dev': 6, 'nlink': 1, 'atime': 1774925929.231426, 'mtime': 1774925929.228426, 'ctime': 1774925929.228426, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-185c377e-da3e-5428-98db-747be321d2f9/osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.008958 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:38.008970 | orchestrator |
2026-03-31 04:36:38.008983 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-03-31 04:36:38.008995 | orchestrator | Tuesday 31 March 2026 04:36:36 +0000 (0:00:00.397) 0:02:09.255 *********
2026-03-31 04:36:38.009007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})
2026-03-31 04:36:38.009020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})
2026-03-31 04:36:38.009031 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:38.009042 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})
2026-03-31 04:36:38.009053 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})
2026-03-31 04:36:38.009064 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:38.009075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})
2026-03-31 04:36:38.009086 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})
2026-03-31 04:36:38.009097 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:38.009108 | orchestrator |
2026-03-31 04:36:38.009119 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-03-31 04:36:38.009149 | orchestrator | Tuesday 31 March 2026 04:36:37 +0000 (0:00:00.612) 0:02:09.867 *********
2026-03-31 04:36:38.009163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.009176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.009196 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:38.009208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}, 'ansible_loop_var': 'item'})
2026-03-31 04:36:38.009220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data':
'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:38.009231 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:38.009242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:38.009259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:38.009270 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:38.009282 | orchestrator | 2026-03-31 04:36:38.009293 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-31 04:36:38.009305 | orchestrator | Tuesday 31 March 2026 04:36:37 +0000 (0:00:00.368) 0:02:10.236 ********* 2026-03-31 04:36:38.009316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'})  2026-03-31 04:36:38.009328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'})  2026-03-31 04:36:38.009340 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:38.009351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'})  2026-03-31 04:36:38.009362 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'})  2026-03-31 04:36:38.009373 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:38.009384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'})  2026-03-31 04:36:38.009395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'})  2026-03-31 04:36:38.009406 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:38.009418 | orchestrator | 2026-03-31 04:36:38.009429 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-31 04:36:38.009440 | orchestrator | Tuesday 31 March 2026 04:36:37 +0000 (0:00:00.341) 0:02:10.578 ********* 2026-03-31 04:36:38.009452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-dad98f55-09f4-5a2b-a5c7-aafce2660c53', 'data_vg': 'ceph-dad98f55-09f4-5a2b-a5c7-aafce2660c53'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:38.009470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-67174221-9040-517a-ae84-daf8ebd704d7', 'data_vg': 'ceph-67174221-9040-517a-ae84-daf8ebd704d7'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:41.611962 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:41.612043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb', 'data_vg': 'ceph-ff2f0fdf-59cf-5ca7-9eb2-a45b4abb67eb'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:41.612061 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-da0b55d5-13d5-528b-aee2-5667f342587c', 'data_vg': 'ceph-da0b55d5-13d5-528b-aee2-5667f342587c'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:41.612066 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:41.612071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-07ced279-a583-5107-8220-95f80fc10ac7', 'data_vg': 'ceph-07ced279-a583-5107-8220-95f80fc10ac7'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:41.612081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-185c377e-da3e-5428-98db-747be321d2f9', 'data_vg': 'ceph-185c377e-da3e-5428-98db-747be321d2f9'}, 'ansible_loop_var': 'item'})  2026-03-31 04:36:41.612085 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:41.612089 | orchestrator | 2026-03-31 04:36:41.612094 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-31 04:36:41.612099 | orchestrator | Tuesday 31 March 2026 04:36:38 +0000 (0:00:00.379) 0:02:10.958 ********* 2026-03-31 04:36:41.612103 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:41.612107 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:41.612111 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:41.612114 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:41.612118 | orchestrator | skipping: 
[testbed-node-4] 2026-03-31 04:36:41.612122 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:41.612137 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:41.612141 | orchestrator | 2026-03-31 04:36:41.612145 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-31 04:36:41.612149 | orchestrator | Tuesday 31 March 2026 04:36:39 +0000 (0:00:00.997) 0:02:11.955 ********* 2026-03-31 04:36:41.612154 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:41.612157 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:41.612161 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:41.612165 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:41.612170 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 04:36:41.612174 | orchestrator | 2026-03-31 04:36:41.612178 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-31 04:36:41.612182 | orchestrator | Tuesday 31 March 2026 04:36:40 +0000 (0:00:01.209) 0:02:13.165 ********* 2026-03-31 04:36:41.612186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612221 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:41.612225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612253 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:41.612260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612300 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:41.612306 | orchestrator 
| 2026-03-31 04:36:41.612311 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-31 04:36:41.612316 | orchestrator | Tuesday 31 March 2026 04:36:40 +0000 (0:00:00.448) 0:02:13.614 ********* 2026-03-31 04:36:41.612322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612354 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:41.612360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-31 04:36:41.612403 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:41.612407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612426 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:41.612430 | orchestrator | 2026-03-31 04:36:41.612434 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-31 04:36:41.612438 | orchestrator | Tuesday 31 March 2026 04:36:41 +0000 (0:00:00.423) 0:02:14.037 ********* 2026-03-31 04:36:41.612441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-03-31 04:36:41.612457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612461 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:41.612465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:41.612476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466141 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 04:36:48.466183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-31 04:36:48.466190 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466198 | orchestrator | 2026-03-31 04:36:48.466207 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-31 04:36:48.466216 | orchestrator | Tuesday 31 March 2026 04:36:41 +0000 (0:00:00.424) 0:02:14.462 ********* 2026-03-31 04:36:48.466243 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466251 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466259 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466266 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466273 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466281 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466288 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466295 | orchestrator | 2026-03-31 04:36:48.466302 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-31 04:36:48.466310 | orchestrator | Tuesday 31 March 2026 04:36:42 +0000 (0:00:00.953) 0:02:15.416 ********* 2026-03-31 04:36:48.466317 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466324 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466344 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466352 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466359 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466366 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466373 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466380 | orchestrator | 2026-03-31 04:36:48.466388 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-31 04:36:48.466395 | orchestrator | Tuesday 31 March 2026 04:36:43 +0000 (0:00:00.728) 0:02:16.144 ********* 2026-03-31 04:36:48.466402 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 04:36:48.466410 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466436 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466444 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466451 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466458 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466466 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466473 | orchestrator | 2026-03-31 04:36:48.466480 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-03-31 04:36:48.466488 | orchestrator | Tuesday 31 March 2026 04:36:44 +0000 (0:00:00.993) 0:02:17.138 ********* 2026-03-31 04:36:48.466496 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466503 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466510 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466517 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466525 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466532 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466539 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466546 | orchestrator | 2026-03-31 04:36:48.466555 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-31 04:36:48.466564 | orchestrator | Tuesday 31 March 2026 04:36:45 +0000 (0:00:00.741) 0:02:17.879 ********* 2026-03-31 04:36:48.466572 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466580 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466589 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466597 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466605 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466613 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466622 | 
orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466630 | orchestrator | 2026-03-31 04:36:48.466638 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-31 04:36:48.466646 | orchestrator | Tuesday 31 March 2026 04:36:46 +0000 (0:00:00.973) 0:02:18.853 ********* 2026-03-31 04:36:48.466655 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466663 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466671 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466680 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466687 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466696 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466710 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466718 | orchestrator | 2026-03-31 04:36:48.466726 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-31 04:36:48.466735 | orchestrator | Tuesday 31 March 2026 04:36:47 +0000 (0:00:00.964) 0:02:19.818 ********* 2026-03-31 04:36:48.466743 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466751 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466785 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:36:48.466793 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:36:48.466802 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:36:48.466810 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:36:48.466818 | orchestrator | skipping: [testbed-manager] 2026-03-31 04:36:48.466826 | orchestrator | 2026-03-31 04:36:48.466848 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-31 04:36:48.466858 | orchestrator | Tuesday 31 March 2026 04:36:47 +0000 (0:00:00.759) 0:02:20.578 ********* 2026-03-31 04:36:48.466867 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-31 04:36:48.466877 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-31 04:36:48.466887 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-31 04:36:48.466896 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-31 04:36:48.466905 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-31 04:36:48.466916 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-31 04:36:48.466925 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:36:48.466933 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-31 04:36:48.466940 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-31 04:36:48.466952 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-31 04:36:48.466960 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-31 04:36:48.466967 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-31 04:36:48.466974 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-31 04:36:48.466981 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:36:48.466989 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-31 04:36:48.466996 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-31 04:36:48.467008 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-31 04:36:48.467016 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-31 04:36:48.467023 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-31 04:36:48.467030 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-03-31 04:36:48.467037 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:48.467044 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:48.467051 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:48.467063 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.720708 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.720891 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.720910 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.720923 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.720937 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.720950 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.720961 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.720972 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.720983 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721012 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.721024 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721035 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.721068 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.721079 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721091 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:50.721103 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:50.721114 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.721125 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.721136 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.721147 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.721158 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721169 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721180 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:50.721191 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721202 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:50.721213 | orchestrator |
2026-03-31 04:36:50.721244 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-03-31 04:36:50.721260 | orchestrator | Tuesday 31 March 2026 04:36:49 +0000 (0:00:01.242) 0:02:21.820 *********
2026-03-31 04:36:50.721273 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:50.721286 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:50.721298 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:36:50.721311 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:36:50.721323 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:36:50.721335 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:36:50.721347 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:36:50.721360 | orchestrator |
2026-03-31 04:36:50.721373 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-03-31 04:36:50.721385 | orchestrator | Tuesday 31 March 2026 04:36:49 +0000 (0:00:00.756) 0:02:22.577 *********
2026-03-31 04:36:50.721398 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.721411 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.721424 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.721436 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.721456 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721470 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721488 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.721500 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.721513 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.721525 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.721538 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721551 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721564 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:36:50.721576 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:36:50.721588 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:36:50.721600 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:36:50.721611 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:36:50.721622 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:36:50.721633 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:36:50.721644 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:36:50.721661 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:37:00.350982 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:37:00.351115 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:37:00.351135 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:37:00.351148 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:37:00.351185 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:37:00.351198 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.351210 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.351221 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:37:00.351233 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:37:00.351244 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:37:00.351270 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:37:00.351282 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:37:00.351293 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:37:00.351305 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:37:00.351316 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:37:00.351330 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:37:00.351346 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-31 04:37:00.351357 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.351368 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-31 04:37:00.351379 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-31 04:37:00.351390 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:37:00.351401 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:37:00.351412 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:37:00.351423 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.351434 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-31 04:37:00.351471 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-31 04:37:00.351483 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-31 04:37:00.351495 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.351507 | orchestrator |
2026-03-31 04:37:00.351523 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-03-31 04:37:00.351546 | orchestrator | Tuesday 31 March 2026 04:36:51 +0000 (0:00:01.348) 0:02:23.925 *********
2026-03-31 04:37:00.351566 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:00.351580 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:00.351593 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.351606 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.351619 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.351631 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.351651 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.351671 | orchestrator |
2026-03-31 04:37:00.351689 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-03-31 04:37:00.351703 | orchestrator | Tuesday 31 March 2026 04:36:51 +0000 (0:00:00.748) 0:02:24.673 *********
2026-03-31 04:37:00.351715 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:00.351728 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:00.351741 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.351754 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.351766 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.351822 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.351846 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.351866 | orchestrator |
2026-03-31 04:37:00.351882 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-03-31 04:37:00.351898 | orchestrator | Tuesday 31 March 2026 04:36:52 +0000 (0:00:00.993) 0:02:25.667 *********
2026-03-31 04:37:00.351918 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:00.351936 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:00.351948 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.351959 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.351970 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.351988 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.352000 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.352011 | orchestrator |
2026-03-31 04:37:00.352022 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-03-31 04:37:00.352033 | orchestrator | Tuesday 31 March 2026 04:36:54 +0000 (0:00:01.469) 0:02:27.137 *********
2026-03-31 04:37:00.352044 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-31 04:37:00.352057 | orchestrator |
2026-03-31 04:37:00.352068 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-03-31 04:37:00.352079 | orchestrator | Tuesday 31 March 2026 04:36:56 +0000 (0:00:01.608) 0:02:28.746 *********
2026-03-31 04:37:00.352090 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352102 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352115 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352135 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352153 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352165 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352185 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-31 04:37:00.352197 | orchestrator |
2026-03-31 04:37:00.352208 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-03-31 04:37:00.352219 | orchestrator | Tuesday 31 March 2026 04:36:57 +0000 (0:00:01.164) 0:02:29.911 *********
2026-03-31 04:37:00.352230 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:00.352240 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:00.352251 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.352263 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.352274 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.352285 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.352296 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.352307 | orchestrator |
2026-03-31 04:37:00.352318 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-03-31 04:37:00.352329 | orchestrator | Tuesday 31 March 2026 04:36:58 +0000 (0:00:00.803) 0:02:30.714 *********
2026-03-31 04:37:00.352340 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:00.352351 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:00.352362 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:00.352373 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:00.352383 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:00.352394 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:00.352405 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:00.352416 | orchestrator |
2026-03-31 04:37:00.352427 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-03-31 04:37:00.352438 | orchestrator | Tuesday 31 March 2026 04:36:59 +0000 (0:00:01.084) 0:02:31.799 *********
2026-03-31 04:37:00.352449 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:00.352461 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:37:00.352472 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:37:00.352483 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:37:00.352494 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:37:00.352514 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:37:22.374205 | orchestrator | ok: [testbed-manager]
2026-03-31 04:37:22.374324 | orchestrator |
2026-03-31 04:37:22.374344 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-03-31 04:37:22.374358 | orchestrator | Tuesday 31 March 2026 04:37:00 +0000 (0:00:01.221) 0:02:33.020 *********
2026-03-31 04:37:22.374370 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:22.374382 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:22.374394 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:22.374405 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:22.374416 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:22.374427 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:22.374439 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:22.374451 | orchestrator |
2026-03-31 04:37:22.374462 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-03-31 04:37:22.374473 | orchestrator | Tuesday 31 March 2026 04:37:01 +0000 (0:00:01.514) 0:02:34.535 *********
2026-03-31 04:37:22.374485 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:22.374496 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:37:22.374507 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:37:22.374517 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:37:22.374528 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:37:22.374539 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:37:22.374550 | orchestrator | skipping: [testbed-manager]
2026-03-31 04:37:22.374561 | orchestrator |
2026-03-31 04:37:22.374572 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-03-31 04:37:22.374584 | orchestrator | Tuesday 31 March 2026 04:37:03 +0000 (0:00:01.505) 0:02:36.040 *********
2026-03-31 04:37:22.374595 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.374606 | orchestrator |
2026-03-31 04:37:22.374639 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-03-31 04:37:22.374650 | orchestrator | Tuesday 31 March 2026 04:37:05 +0000 (0:00:02.034) 0:02:38.074 *********
2026-03-31 04:37:22.374661 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:22.374672 | orchestrator |
2026-03-31 04:37:22.374683 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-03-31 04:37:22.374695 | orchestrator |
2026-03-31 04:37:22.374705 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-31 04:37:22.374716 | orchestrator | Tuesday 31 March 2026 04:37:05 +0000 (0:00:00.310) 0:02:38.384 *********
2026-03-31 04:37:22.374736 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.374756 | orchestrator |
2026-03-31 04:37:22.374794 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-31 04:37:22.374839 | orchestrator | Tuesday 31 March 2026 04:37:06 +0000 (0:00:00.483) 0:02:38.868 *********
2026-03-31 04:37:22.374859 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.374905 | orchestrator |
2026-03-31 04:37:22.374927 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-03-31 04:37:22.374947 | orchestrator | Tuesday 31 March 2026 04:37:06 +0000 (0:00:00.214) 0:02:39.083 *********
2026-03-31 04:37:22.374961 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-31 04:37:22.374975 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-31 04:37:22.374987 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-31 04:37:22.374998 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-31 04:37:22.375011 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-31 04:37:22.375044 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}])
2026-03-31 04:37:22.375058 | orchestrator |
2026-03-31 04:37:22.375070 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-03-31 04:37:22.375081 | orchestrator |
2026-03-31 04:37:22.375092 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-03-31 04:37:22.375103 | orchestrator | Tuesday 31 March 2026 04:37:15 +0000 (0:00:08.786) 0:02:47.869 *********
2026-03-31 04:37:22.375125 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375136 | orchestrator |
2026-03-31 04:37:22.375147 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-03-31 04:37:22.375159 | orchestrator | Tuesday 31 March 2026 04:37:15 +0000 (0:00:00.471) 0:02:48.340 *********
2026-03-31 04:37:22.375170 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375180 | orchestrator |
2026-03-31 04:37:22.375191 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-03-31 04:37:22.375202 | orchestrator | Tuesday 31 March 2026 04:37:15 +0000 (0:00:00.132) 0:02:48.481 *********
2026-03-31 04:37:22.375213 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:22.375225 | orchestrator |
2026-03-31 04:37:22.375236 | orchestrator | TASK [Select a running monitor] ************************************************
2026-03-31 04:37:22.375247 | orchestrator | Tuesday 31 March 2026 04:37:15 +0000 (0:00:00.161) 0:02:48.614 *********
2026-03-31 04:37:22.375258 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375269 | orchestrator |
2026-03-31 04:37:22.375280 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 04:37:22.375290 | orchestrator | Tuesday 31 March 2026 04:37:16 +0000 (0:00:00.482) 0:02:48.775 *********
2026-03-31 04:37:22.375301 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-03-31 04:37:22.375312 | orchestrator |
2026-03-31 04:37:22.375323 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-31 04:37:22.375334 | orchestrator | Tuesday 31 March 2026 04:37:16 +0000 (0:00:00.452) 0:02:49.258 *********
2026-03-31 04:37:22.375345 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375356 | orchestrator |
2026-03-31 04:37:22.375367 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-31 04:37:22.375378 | orchestrator | Tuesday 31 March 2026 04:37:17 +0000 (0:00:00.138) 0:02:49.710 *********
2026-03-31 04:37:22.375389 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375400 | orchestrator |
2026-03-31 04:37:22.375411 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-31 04:37:22.375422 | orchestrator | Tuesday 31 March 2026 04:37:17 +0000 (0:00:00.471) 0:02:49.849 *********
2026-03-31 04:37:22.375433 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375444 | orchestrator |
2026-03-31 04:37:22.375455 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-31 04:37:22.375465 | orchestrator | Tuesday 31 March 2026 04:37:17 +0000 (0:00:00.166) 0:02:50.320 *********
2026-03-31 04:37:22.375476 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375487 | orchestrator |
2026-03-31 04:37:22.375498 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-31 04:37:22.375509 | orchestrator | Tuesday 31 March 2026 04:37:17 +0000 (0:00:00.146) 0:02:50.487 *********
2026-03-31 04:37:22.375520 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375531 | orchestrator |
2026-03-31 04:37:22.375542 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-31 04:37:22.375553 | orchestrator | Tuesday 31 March 2026 04:37:17 +0000 (0:00:00.155) 0:02:50.633 *********
2026-03-31 04:37:22.375564 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375575 | orchestrator |
2026-03-31 04:37:22.375586 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-31 04:37:22.375599 | orchestrator | Tuesday 31 March 2026 04:37:18 +0000 (0:00:00.156) 0:02:50.790 *********
2026-03-31 04:37:22.375610 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:22.375621 | orchestrator |
2026-03-31 04:37:22.375632 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-31 04:37:22.375643 | orchestrator | Tuesday 31 March 2026 04:37:18 +0000 (0:00:00.158) 0:02:50.946 *********
2026-03-31 04:37:22.375653 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375664 | orchestrator |
2026-03-31 04:37:22.375675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-31 04:37:22.375693 | orchestrator | Tuesday 31 March 2026 04:37:18 +0000 (0:00:00.158) 0:02:51.105 *********
2026-03-31 04:37:22.375704 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:37:22.375715 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:37:22.375726 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:37:22.375737 | orchestrator |
2026-03-31 04:37:22.375748 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-31 04:37:22.375801 | orchestrator | Tuesday 31 March 2026 04:37:19 +0000 (0:00:00.924) 0:02:52.029 *********
2026-03-31 04:37:22.375841 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:22.375862 | orchestrator |
2026-03-31 04:37:22.375881 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-31 04:37:22.375899 | orchestrator | Tuesday 31 March 2026 04:37:19 +0000 (0:00:00.247) 0:02:52.277 *********
2026-03-31 04:37:22.375918 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:37:22.375932 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:37:22.375942 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:37:22.375953 | orchestrator |
2026-03-31 04:37:22.375964 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-31 04:37:22.375975 | orchestrator | Tuesday 31 March 2026 04:37:21 +0000 (0:00:02.065) 0:02:54.343 *********
2026-03-31 04:37:22.375987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:37:22.376008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 04:37:27.894676 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 04:37:27.894785 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:27.894802 | orchestrator |
2026-03-31 04:37:27.894815 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-31 04:37:27.894874 | orchestrator | Tuesday 31 March 2026 04:37:22 +0000 (0:00:00.698) 0:02:55.041 *********
2026-03-31 04:37:27.894888 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.894903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.894915 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.894927 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:27.894938 | orchestrator |
2026-03-31 04:37:27.894949 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-31 04:37:27.894961 | orchestrator | Tuesday 31 March 2026 04:37:23 +0000 (0:00:01.183) 0:02:56.224 *********
2026-03-31 04:37:27.894992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895006 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895053 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:37:27.895064 | orchestrator |
2026-03-31 04:37:27.895075 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-31 04:37:27.895087 | orchestrator | Tuesday 31 March 2026 04:37:23 +0000 (0:00:00.180) 0:02:56.405 *********
2026-03-31 04:37:27.895101 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '80cb11f76dbe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:37:20.082502', 'end': '2026-03-31 04:37:20.125532', 'delta': '0:00:00.043030', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80cb11f76dbe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895135 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1ea1d727f3e0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:37:20.940752', 'end': '2026-03-31 04:37:20.984566', 'delta': '0:00:00.043814', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ea1d727f3e0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895149 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'df3f30930c20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:37:21.474672', 'end': '2026-03-31 04:37:21.522985', 'delta': '0:00:00.048313', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df3f30930c20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:37:27.895160 | orchestrator |
2026-03-31 04:37:27.895172 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-31 04:37:27.895184 | orchestrator | Tuesday 31 March 2026 04:37:23 +0000 (0:00:00.205) 0:02:56.610 *********
2026-03-31 04:37:27.895198 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:37:27.895211 | orchestrator |
2026-03-31 04:37:27.895224 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-31 04:37:27.895236 | orchestrator |
Tuesday 31 March 2026 04:37:24 +0000 (0:00:00.290) 0:02:56.901 ********* 2026-03-31 04:37:27.895250 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895262 | orchestrator | 2026-03-31 04:37:27.895275 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:37:27.895288 | orchestrator | Tuesday 31 March 2026 04:37:24 +0000 (0:00:00.252) 0:02:57.154 ********* 2026-03-31 04:37:27.895308 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:37:27.895321 | orchestrator | 2026-03-31 04:37:27.895334 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:37:27.895346 | orchestrator | Tuesday 31 March 2026 04:37:24 +0000 (0:00:00.140) 0:02:57.294 ********* 2026-03-31 04:37:27.895364 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-31 04:37:27.895377 | orchestrator | 2026-03-31 04:37:27.895389 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:37:27.895402 | orchestrator | Tuesday 31 March 2026 04:37:25 +0000 (0:00:01.333) 0:02:58.627 ********* 2026-03-31 04:37:27.895414 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:37:27.895427 | orchestrator | 2026-03-31 04:37:27.895440 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:37:27.895453 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.147) 0:02:58.774 ********* 2026-03-31 04:37:27.895466 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895479 | orchestrator | 2026-03-31 04:37:27.895491 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:37:27.895504 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.123) 0:02:58.897 ********* 2026-03-31 04:37:27.895516 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895529 | orchestrator | 2026-03-31 
04:37:27.895542 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:37:27.895553 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.232) 0:02:59.130 ********* 2026-03-31 04:37:27.895564 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895575 | orchestrator | 2026-03-31 04:37:27.895586 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:37:27.895597 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.111) 0:02:59.241 ********* 2026-03-31 04:37:27.895608 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895619 | orchestrator | 2026-03-31 04:37:27.895635 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:37:27.895653 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.140) 0:02:59.382 ********* 2026-03-31 04:37:27.895681 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895700 | orchestrator | 2026-03-31 04:37:27.895718 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:37:27.895735 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.133) 0:02:59.515 ********* 2026-03-31 04:37:27.895751 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895769 | orchestrator | 2026-03-31 04:37:27.895787 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:37:27.895804 | orchestrator | Tuesday 31 March 2026 04:37:26 +0000 (0:00:00.122) 0:02:59.638 ********* 2026-03-31 04:37:27.895822 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895886 | orchestrator | 2026-03-31 04:37:27.895906 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:37:27.895926 | orchestrator | Tuesday 31 March 2026 04:37:27 +0000 (0:00:00.405) 
0:03:00.044 ********* 2026-03-31 04:37:27.895946 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.895967 | orchestrator | 2026-03-31 04:37:27.895986 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:37:27.896007 | orchestrator | Tuesday 31 March 2026 04:37:27 +0000 (0:00:00.145) 0:03:00.190 ********* 2026-03-31 04:37:27.896028 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:27.896047 | orchestrator | 2026-03-31 04:37:27.896064 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:37:27.896076 | orchestrator | Tuesday 31 March 2026 04:37:27 +0000 (0:00:00.141) 0:03:00.332 ********* 2026-03-31 04:37:27.896099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:37:28.128533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:37:28.128631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:37:28.128662 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:28.128676 | orchestrator | 2026-03-31 04:37:28.128688 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:37:28.128700 | orchestrator | Tuesday 31 March 2026 04:37:27 +0000 (0:00:00.239) 0:03:00.571 ********* 2026-03-31 04:37:28.128713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:28.128727 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:28.128738 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:28.128767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223392 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223492 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:37:32.223524 | 
orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.223538 | orchestrator | 2026-03-31 04:37:32.223552 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:37:32.223565 | orchestrator | Tuesday 31 March 2026 04:37:28 +0000 (0:00:00.233) 0:03:00.804 ********* 2026-03-31 04:37:32.223576 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:37:32.223588 | orchestrator | 2026-03-31 04:37:32.223599 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:37:32.223610 | orchestrator | Tuesday 31 March 2026 04:37:28 +0000 (0:00:00.493) 0:03:01.297 ********* 2026-03-31 04:37:32.223621 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:37:32.223632 | orchestrator | 2026-03-31 04:37:32.223643 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:37:32.223654 | orchestrator | Tuesday 31 March 2026 04:37:28 +0000 (0:00:00.159) 0:03:01.457 ********* 2026-03-31 04:37:32.223665 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:37:32.223676 | orchestrator | 2026-03-31 04:37:32.223687 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:37:32.223698 | orchestrator | Tuesday 31 March 2026 04:37:29 +0000 (0:00:00.507) 0:03:01.965 ********* 2026-03-31 04:37:32.223709 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.223719 | orchestrator | 2026-03-31 04:37:32.223730 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:37:32.223742 | orchestrator | Tuesday 31 March 2026 04:37:29 +0000 (0:00:00.148) 0:03:02.114 ********* 2026-03-31 04:37:32.223762 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.223775 | orchestrator | 2026-03-31 04:37:32.223788 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 
04:37:32.223800 | orchestrator | Tuesday 31 March 2026 04:37:29 +0000 (0:00:00.235) 0:03:02.349 ********* 2026-03-31 04:37:32.223812 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.223824 | orchestrator | 2026-03-31 04:37:32.223873 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:37:32.223891 | orchestrator | Tuesday 31 March 2026 04:37:29 +0000 (0:00:00.143) 0:03:02.492 ********* 2026-03-31 04:37:32.223904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:37:32.223916 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:37:32.223929 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:37:32.223941 | orchestrator | 2026-03-31 04:37:32.223954 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:37:32.223967 | orchestrator | Tuesday 31 March 2026 04:37:30 +0000 (0:00:00.998) 0:03:03.491 ********* 2026-03-31 04:37:32.223980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:37:32.223993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:37:32.224006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:37:32.224018 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.224031 | orchestrator | 2026-03-31 04:37:32.224044 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:37:32.224057 | orchestrator | Tuesday 31 March 2026 04:37:30 +0000 (0:00:00.174) 0:03:03.665 ********* 2026-03-31 04:37:32.224069 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:37:32.224081 | orchestrator | 2026-03-31 04:37:32.224094 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:37:32.224107 | orchestrator | Tuesday 31 March 2026 04:37:31 +0000 
(0:00:00.419) 0:03:04.084 ********* 2026-03-31 04:37:32.224120 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:37:32.224133 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:37:32.224145 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:37:32.224156 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:37:32.224167 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:37:32.224187 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:38:00.179717 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:38:00.179836 | orchestrator | 2026-03-31 04:38:00.179853 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:38:00.179867 | orchestrator | Tuesday 31 March 2026 04:37:32 +0000 (0:00:00.810) 0:03:04.895 ********* 2026-03-31 04:38:00.179926 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:38:00.179939 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:38:00.179951 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:38:00.179962 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:38:00.179973 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:38:00.179985 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:38:00.179996 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 
04:38:00.180007 | orchestrator |
2026-03-31 04:38:00.180018 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-31 04:38:00.180029 | orchestrator | Tuesday 31 March 2026 04:37:33 +0000 (0:00:01.609) 0:03:06.504 *********
2026-03-31 04:38:00.180081 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-03-31 04:38:00.180093 | orchestrator |
2026-03-31 04:38:00.180104 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-31 04:38:00.180115 | orchestrator | Tuesday 31 March 2026 04:37:35 +0000 (0:00:01.269) 0:03:07.774 *********
2026-03-31 04:38:00.180127 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180138 | orchestrator |
2026-03-31 04:38:00.180149 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-31 04:38:00.180160 | orchestrator | Tuesday 31 March 2026 04:37:35 +0000 (0:00:00.244) 0:03:08.018 *********
2026-03-31 04:38:00.180171 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180182 | orchestrator |
2026-03-31 04:38:00.180193 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-31 04:38:00.180204 | orchestrator | Tuesday 31 March 2026 04:37:35 +0000 (0:00:00.140) 0:03:08.158 *********
2026-03-31 04:38:00.180215 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-03-31 04:38:00.180226 | orchestrator |
2026-03-31 04:38:00.180237 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-31 04:38:00.180248 | orchestrator | Tuesday 31 March 2026 04:37:36 +0000 (0:00:01.191) 0:03:09.350 *********
2026-03-31 04:38:00.180261 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180275 | orchestrator |
2026-03-31 04:38:00.180288 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-31 04:38:00.180302 | orchestrator | Tuesday 31 March 2026 04:37:36 +0000 (0:00:00.136) 0:03:09.487 *********
2026-03-31 04:38:00.180315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:38:00.180329 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:38:00.180342 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:38:00.180355 | orchestrator |
2026-03-31 04:38:00.180367 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-31 04:38:00.180380 | orchestrator | Tuesday 31 March 2026 04:37:38 +0000 (0:00:01.477) 0:03:10.964 *********
2026-03-31 04:38:00.180393 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-31 04:38:00.180406 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-31 04:38:00.180420 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-31 04:38:00.180433 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-31 04:38:00.180446 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-31 04:38:00.180460 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-31 04:38:00.180473 | orchestrator |
2026-03-31 04:38:00.180486 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-31 04:38:00.180499 | orchestrator | Tuesday 31 March 2026 04:37:50 +0000 (0:00:11.792) 0:03:22.756 *********
2026-03-31 04:38:00.180512 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:38:00.180526 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:38:00.180538 | orchestrator |
2026-03-31 04:38:00.180551 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-31 04:38:00.180564 | orchestrator | Tuesday 31 March 2026 04:37:53 +0000 (0:00:03.239) 0:03:25.996 *********
2026-03-31 04:38:00.180577 | orchestrator | changed: [testbed-node-0]
2026-03-31 04:38:00.180590 | orchestrator |
2026-03-31 04:38:00.180604 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:38:00.180616 | orchestrator | Tuesday 31 March 2026 04:37:54 +0000 (0:00:01.441) 0:03:27.438 *********
2026-03-31 04:38:00.180627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-31 04:38:00.180660 | orchestrator |
2026-03-31 04:38:00.180683 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:38:00.180694 | orchestrator | Tuesday 31 March 2026 04:37:55 +0000 (0:00:00.519) 0:03:27.957 *********
2026-03-31 04:38:00.180723 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-31 04:38:00.180735 | orchestrator |
2026-03-31 04:38:00.180746 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:38:00.180757 | orchestrator | Tuesday 31 March 2026 04:37:55 +0000 (0:00:00.514) 0:03:28.200 *********
2026-03-31 04:38:00.180768 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.180780 | orchestrator |
2026-03-31 04:38:00.180791 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:38:00.180802 | orchestrator | Tuesday 31 March 2026 04:37:56 +0000 (0:00:00.514) 0:03:28.715 *********
2026-03-31 04:38:00.180813 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180825 | orchestrator |
2026-03-31 04:38:00.180836 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:38:00.180846 | orchestrator | Tuesday 31 March 2026 04:37:56 +0000 (0:00:00.143) 0:03:28.858 *********
2026-03-31 04:38:00.180857 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180868 | orchestrator |
2026-03-31 04:38:00.180901 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:38:00.180913 | orchestrator | Tuesday 31 March 2026 04:37:56 +0000 (0:00:00.129) 0:03:28.988 *********
2026-03-31 04:38:00.180923 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.180934 | orchestrator |
2026-03-31 04:38:00.180945 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:38:00.180956 | orchestrator | Tuesday 31 March 2026 04:37:56 +0000 (0:00:00.127) 0:03:29.115 *********
2026-03-31 04:38:00.180967 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.180978 | orchestrator |
2026-03-31 04:38:00.180995 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:38:00.181006 | orchestrator | Tuesday 31 March 2026 04:37:56 +0000 (0:00:00.550) 0:03:29.666 *********
2026-03-31 04:38:00.181017 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181028 | orchestrator |
2026-03-31 04:38:00.181039 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:38:00.181050 | orchestrator | Tuesday 31 March 2026 04:37:57 +0000 (0:00:00.141) 0:03:29.808 *********
2026-03-31 04:38:00.181061 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181072 | orchestrator |
2026-03-31 04:38:00.181083 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:38:00.181094 | orchestrator | Tuesday 31 March 2026 04:37:57 +0000 (0:00:00.131) 0:03:29.939 *********
2026-03-31 04:38:00.181105 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181116 | orchestrator |
2026-03-31 04:38:00.181127 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:38:00.181138 | orchestrator | Tuesday 31 March 2026 04:37:57 +0000 (0:00:00.514) 0:03:30.454 *********
2026-03-31 04:38:00.181149 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181160 | orchestrator |
2026-03-31 04:38:00.181171 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:38:00.181182 | orchestrator | Tuesday 31 March 2026 04:37:58 +0000 (0:00:00.530) 0:03:30.985 *********
2026-03-31 04:38:00.181193 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181204 | orchestrator |
2026-03-31 04:38:00.181214 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:38:00.181225 | orchestrator | Tuesday 31 March 2026 04:37:58 +0000 (0:00:00.369) 0:03:31.354 *********
2026-03-31 04:38:00.181236 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181247 | orchestrator |
2026-03-31 04:38:00.181258 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:38:00.181269 | orchestrator | Tuesday 31 March 2026 04:37:58 +0000 (0:00:00.169) 0:03:31.524 *********
2026-03-31 04:38:00.181287 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181298 | orchestrator |
2026-03-31 04:38:00.181309 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:38:00.181320 | orchestrator | Tuesday 31 March 2026 04:37:58 +0000 (0:00:00.118) 0:03:31.643 *********
2026-03-31 04:38:00.181331 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181342 | orchestrator |
2026-03-31 04:38:00.181353 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:38:00.181363 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.145) 0:03:31.788 *********
2026-03-31 04:38:00.181374 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181385 | orchestrator |
2026-03-31 04:38:00.181396 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:38:00.181407 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.145) 0:03:31.933 *********
2026-03-31 04:38:00.181418 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181429 | orchestrator |
2026-03-31 04:38:00.181440 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:38:00.181451 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.124) 0:03:32.057 *********
2026-03-31 04:38:00.181462 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181473 | orchestrator |
2026-03-31 04:38:00.181484 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:38:00.181495 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.140) 0:03:32.198 *********
2026-03-31 04:38:00.181505 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181516 | orchestrator |
2026-03-31 04:38:00.181527 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:38:00.181538 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.155) 0:03:32.354 *********
2026-03-31 04:38:00.181549 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181560 | orchestrator |
2026-03-31 04:38:00.181571 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:38:00.181582 | orchestrator | Tuesday 31 March 2026 04:37:59 +0000 (0:00:00.142) 0:03:32.497 *********
2026-03-31 04:38:00.181593 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:00.181603 | orchestrator |
2026-03-31 04:38:00.181614 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:38:00.181625 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.235) 0:03:32.732 *********
2026-03-31 04:38:00.181636 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:00.181647 | orchestrator |
2026-03-31 04:38:00.181658 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:38:00.181675 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.115) 0:03:32.848 *********
2026-03-31 04:38:12.171872 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172008 | orchestrator |
2026-03-31 04:38:12.172021 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:38:12.172030 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.118) 0:03:32.966 *********
2026-03-31 04:38:12.172038 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172046 | orchestrator |
2026-03-31 04:38:12.172054 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:38:12.172062 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.411) 0:03:33.378 *********
2026-03-31 04:38:12.172070 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172077 | orchestrator |
2026-03-31 04:38:12.172085 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:38:12.172092 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.133) 0:03:33.512 *********
2026-03-31 04:38:12.172100 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172107 | orchestrator |
2026-03-31 04:38:12.172115 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:38:12.172122 | orchestrator | Tuesday 31 March 2026 04:38:00 +0000 (0:00:00.140) 0:03:33.652 *********
2026-03-31 04:38:12.172147 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172155 | orchestrator |
2026-03-31 04:38:12.172163 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:38:12.172170 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.143) 0:03:33.795 *********
2026-03-31 04:38:12.172188 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172196 | orchestrator |
2026-03-31 04:38:12.172203 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:38:12.172212 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.140) 0:03:33.936 *********
2026-03-31 04:38:12.172219 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172227 | orchestrator |
2026-03-31 04:38:12.172234 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:38:12.172241 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.133) 0:03:34.069 *********
2026-03-31 04:38:12.172249 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172256 | orchestrator |
2026-03-31 04:38:12.172264 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:38:12.172271 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.128) 0:03:34.198 *********
2026-03-31 04:38:12.172279 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172286 | orchestrator |
2026-03-31 04:38:12.172294 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:38:12.172301 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.128) 0:03:34.326 *********
2026-03-31 04:38:12.172309 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172316 | orchestrator |
2026-03-31 04:38:12.172323 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:38:12.172331 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.140) 0:03:34.467 *********
2026-03-31 04:38:12.172338 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172346 | orchestrator |
2026-03-31 04:38:12.172353 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:38:12.172360 | orchestrator | Tuesday 31 March 2026 04:38:01 +0000 (0:00:00.190) 0:03:34.657 *********
2026-03-31 04:38:12.172368 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.172376 | orchestrator |
2026-03-31 04:38:12.172384 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:38:12.172391 | orchestrator | Tuesday 31 March 2026 04:38:02 +0000 (0:00:00.948) 0:03:35.605 *********
2026-03-31 04:38:12.172399 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.172406 | orchestrator |
2026-03-31 04:38:12.172413 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:38:12.172421 | orchestrator | Tuesday 31 March 2026 04:38:04 +0000 (0:00:01.387) 0:03:36.993 *********
2026-03-31 04:38:12.172428 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-31 04:38:12.172436 | orchestrator |
2026-03-31 04:38:12.172443 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 04:38:12.172451 | orchestrator | Tuesday 31 March 2026 04:38:04 +0000 (0:00:00.466) 0:03:37.460 *********
2026-03-31 04:38:12.172458 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172466 | orchestrator |
2026-03-31 04:38:12.172473 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 04:38:12.172481 | orchestrator | Tuesday 31 March 2026 04:38:04 +0000 (0:00:00.137) 0:03:37.597 *********
2026-03-31 04:38:12.172488 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172496 | orchestrator |
2026-03-31 04:38:12.172503 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 04:38:12.172510 | orchestrator | Tuesday 31 March 2026 04:38:05 +0000 (0:00:00.148) 0:03:37.746 *********
2026-03-31 04:38:12.172518 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 04:38:12.172525 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 04:38:12.172538 | orchestrator |
2026-03-31 04:38:12.172546 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 04:38:12.172554 | orchestrator | Tuesday 31 March 2026 04:38:05 +0000 (0:00:00.860) 0:03:38.607 *********
2026-03-31 04:38:12.172561 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.172568 | orchestrator |
2026-03-31 04:38:12.172576 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 04:38:12.172583 | orchestrator | Tuesday 31 March 2026 04:38:06 +0000 (0:00:00.613) 0:03:39.220 *********
2026-03-31 04:38:12.172590 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172598 | orchestrator |
2026-03-31 04:38:12.172605 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 04:38:12.172613 | orchestrator | Tuesday 31 March 2026 04:38:06 +0000 (0:00:00.146) 0:03:39.367 *********
2026-03-31 04:38:12.172620 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172628 | orchestrator |
2026-03-31 04:38:12.172648 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:38:12.172656 | orchestrator | Tuesday 31 March 2026 04:38:06 +0000 (0:00:00.136) 0:03:39.504 *********
2026-03-31 04:38:12.172663 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172671 | orchestrator |
2026-03-31 04:38:12.172678 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:38:12.172685 | orchestrator | Tuesday 31 March 2026 04:38:06 +0000 (0:00:00.134) 0:03:39.639 *********
2026-03-31 04:38:12.172693 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-31 04:38:12.172700 | orchestrator |
2026-03-31 04:38:12.172707 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 04:38:12.172715 | orchestrator | Tuesday 31 March 2026 04:38:07 +0000 (0:00:00.244) 0:03:39.883 *********
2026-03-31 04:38:12.172722 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.172729 | orchestrator |
2026-03-31 04:38:12.172737 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 04:38:12.172744 | orchestrator | Tuesday 31 March 2026 04:38:07 +0000 (0:00:00.742) 0:03:40.626 *********
2026-03-31 04:38:12.172752 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 04:38:12.172759 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 04:38:12.172770 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 04:38:12.172778 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172785 | orchestrator |
2026-03-31 04:38:12.172792 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 04:38:12.172800 | orchestrator | Tuesday 31 March 2026 04:38:08 +0000 (0:00:00.177) 0:03:40.803 *********
2026-03-31 04:38:12.172807 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172814 | orchestrator |
2026-03-31 04:38:12.172822 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 04:38:12.172829 | orchestrator | Tuesday 31 March 2026 04:38:08 +0000 (0:00:00.125) 0:03:40.929 *********
2026-03-31 04:38:12.172836 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172844 | orchestrator |
2026-03-31 04:38:12.172851 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 04:38:12.172858 | orchestrator | Tuesday 31 March 2026 04:38:08 +0000 (0:00:00.407) 0:03:41.337 *********
2026-03-31 04:38:12.172866 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172873 | orchestrator |
2026-03-31 04:38:12.172880 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 04:38:12.172888 | orchestrator | Tuesday 31 March 2026 04:38:08 +0000 (0:00:00.147) 0:03:41.484 *********
2026-03-31 04:38:12.172967 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.172975 | orchestrator |
2026-03-31 04:38:12.172983 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 04:38:12.172996 | orchestrator | Tuesday 31 March 2026 04:38:08 +0000 (0:00:00.157) 0:03:41.632 *********
2026-03-31 04:38:12.173004 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173011 | orchestrator |
2026-03-31 04:38:12.173019 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:38:12.173026 | orchestrator | Tuesday 31 March 2026 04:38:09 +0000 (0:00:00.157) 0:03:41.790 *********
2026-03-31 04:38:12.173033 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.173041 | orchestrator |
2026-03-31 04:38:12.173048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:38:12.173055 | orchestrator | Tuesday 31 March 2026 04:38:10 +0000 (0:00:01.500) 0:03:43.290 *********
2026-03-31 04:38:12.173063 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:12.173070 | orchestrator |
2026-03-31 04:38:12.173078 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:38:12.173085 | orchestrator | Tuesday 31 March 2026 04:38:10 +0000 (0:00:00.152) 0:03:43.442 *********
2026-03-31 04:38:12.173092 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-31 04:38:12.173100 | orchestrator |
2026-03-31 04:38:12.173107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 04:38:12.173114 | orchestrator | Tuesday 31 March 2026 04:38:10 +0000 (0:00:00.222) 0:03:43.665 *********
2026-03-31 04:38:12.173122 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173129 | orchestrator |
2026-03-31 04:38:12.173136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 04:38:12.173144 | orchestrator | Tuesday 31 March 2026 04:38:11 +0000 (0:00:00.160) 0:03:43.826 *********
2026-03-31 04:38:12.173151 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173158 | orchestrator |
2026-03-31 04:38:12.173166 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 04:38:12.173173 | orchestrator | Tuesday 31 March 2026 04:38:11 +0000 (0:00:00.158) 0:03:43.984 *********
2026-03-31 04:38:12.173180 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173188 | orchestrator |
2026-03-31 04:38:12.173195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 04:38:12.173203 | orchestrator | Tuesday 31 March 2026 04:38:11 +0000 (0:00:00.154) 0:03:44.139 *********
2026-03-31 04:38:12.173210 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173218 | orchestrator |
2026-03-31 04:38:12.173225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 04:38:12.173232 | orchestrator | Tuesday 31 March 2026 04:38:11 +0000 (0:00:00.159) 0:03:44.299 *********
2026-03-31 04:38:12.173239 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173247 | orchestrator |
2026-03-31 04:38:12.173254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 04:38:12.173261 | orchestrator | Tuesday 31 March 2026 04:38:11 +0000 (0:00:00.142) 0:03:44.441 *********
2026-03-31 04:38:12.173269 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:12.173276 | orchestrator |
2026-03-31 04:38:12.173283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 04:38:12.173296 | orchestrator | Tuesday 31 March 2026 04:38:12 +0000 (0:00:00.397) 0:03:44.838 *********
2026-03-31 04:38:25.066636 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.066750 | orchestrator |
2026-03-31 04:38:25.066769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 04:38:25.066784 | orchestrator | Tuesday 31 March 2026 04:38:12 +0000 (0:00:00.157) 0:03:44.995 *********
2026-03-31 04:38:25.066803 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.066822 | orchestrator |
2026-03-31 04:38:25.066841 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 04:38:25.066861 | orchestrator | Tuesday 31 March 2026 04:38:12 +0000 (0:00:00.157) 0:03:45.153 *********
2026-03-31 04:38:25.066880 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:38:25.066956 | orchestrator |
2026-03-31 04:38:25.066977 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:38:25.067025 | orchestrator | Tuesday 31 March 2026 04:38:12 +0000 (0:00:00.233) 0:03:45.386 *********
2026-03-31 04:38:25.067045 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-31 04:38:25.067066 | orchestrator |
2026-03-31 04:38:25.067085 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 04:38:25.067103 | orchestrator | Tuesday 31 March 2026 04:38:12 +0000 (0:00:00.200) 0:03:45.587 *********
2026-03-31 04:38:25.067118 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-31 04:38:25.067131 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-31 04:38:25.067157 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-31 04:38:25.067169 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-31 04:38:25.067180 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-31 04:38:25.067191 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-31 04:38:25.067201 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-31 04:38:25.067236 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-31 04:38:25.067282 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 04:38:25.067301 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 04:38:25.067319 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 04:38:25.067337 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 04:38:25.067355 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 04:38:25.067374 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 04:38:25.067394 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-31 04:38:25.067413 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-31 04:38:25.067433 | orchestrator |
2026-03-31 04:38:25.067445 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:38:25.067456 | orchestrator | Tuesday 31 March 2026 04:38:18 +0000 (0:00:05.612) 0:03:51.199 *********
2026-03-31 04:38:25.067467 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067477 | orchestrator |
2026-03-31 04:38:25.067488 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:38:25.067499 | orchestrator | Tuesday 31 March 2026 04:38:18 +0000 (0:00:00.143) 0:03:51.343 *********
2026-03-31 04:38:25.067510 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067521 | orchestrator |
2026-03-31 04:38:25.067532 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:38:25.067543 | orchestrator | Tuesday 31 March 2026 04:38:18 +0000 (0:00:00.133) 0:03:51.477 *********
2026-03-31 04:38:25.067554 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067565 | orchestrator |
2026-03-31 04:38:25.067576 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:38:25.067587 | orchestrator | Tuesday 31 March 2026 04:38:18 +0000 (0:00:00.140) 0:03:51.617 *********
2026-03-31 04:38:25.067597 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067608 | orchestrator |
2026-03-31 04:38:25.067619 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:38:25.067630 | orchestrator | Tuesday 31 March 2026 04:38:19 +0000 (0:00:00.143) 0:03:51.760 *********
2026-03-31 04:38:25.067641 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067652 | orchestrator |
2026-03-31 04:38:25.067663 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:38:25.067674 | orchestrator | Tuesday 31 March 2026 04:38:19 +0000 (0:00:00.131) 0:03:51.892 *********
2026-03-31 04:38:25.067684 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067695 | orchestrator |
2026-03-31 04:38:25.067706 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:38:25.067730 | orchestrator | Tuesday 31 March 2026 04:38:19 +0000 (0:00:00.398) 0:03:52.290 *********
2026-03-31 04:38:25.067741 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067752 | orchestrator |
2026-03-31 04:38:25.067763 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:38:25.067774 | orchestrator | Tuesday 31 March 2026 04:38:19 +0000 (0:00:00.137) 0:03:52.428 *********
2026-03-31 04:38:25.067785 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067796 | orchestrator |
2026-03-31 04:38:25.067807 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:38:25.067818 | orchestrator | Tuesday 31 March 2026 04:38:19 +0000 (0:00:00.126) 0:03:52.555 *********
2026-03-31 04:38:25.067829 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067840 | orchestrator |
2026-03-31 04:38:25.067851 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:38:25.067862 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.127) 0:03:52.691 *********
2026-03-31 04:38:25.067872 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067883 | orchestrator |
2026-03-31 04:38:25.067894 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:38:25.067972 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.127) 0:03:52.819 *********
2026-03-31 04:38:25.067986 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.067997 | orchestrator |
2026-03-31 04:38:25.068008 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:38:25.068019 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.118) 0:03:52.938 *********
2026-03-31 04:38:25.068029 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068040 | orchestrator |
2026-03-31 04:38:25.068051 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:38:25.068062 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.180) 0:03:53.118 *********
2026-03-31 04:38:25.068073 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068084 | orchestrator |
2026-03-31 04:38:25.068094 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:38:25.068105 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.225) 0:03:53.343 *********
2026-03-31 04:38:25.068116 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068127 | orchestrator |
2026-03-31 04:38:25.068138 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:38:25.068149 | orchestrator | Tuesday 31 March 2026 04:38:20 +0000 (0:00:00.133) 0:03:53.477 *********
2026-03-31 04:38:25.068159 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068170 | orchestrator |
2026-03-31 04:38:25.068181 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:38:25.068200 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.247) 0:03:53.724 *********
2026-03-31 04:38:25.068211 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068222 | orchestrator |
2026-03-31 04:38:25.068233 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:38:25.068243 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.137) 0:03:53.861 *********
2026-03-31 04:38:25.068254 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068265 | orchestrator |
2026-03-31 04:38:25.068277 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:38:25.068295 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.122) 0:03:53.984 *********
2026-03-31 04:38:25.068314 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068333 | orchestrator |
2026-03-31 04:38:25.068352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:38:25.068371 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.124) 0:03:54.109 *********
2026-03-31 04:38:25.068388 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068418 | orchestrator |
2026-03-31 04:38:25.068434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:38:25.068453 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.136) 0:03:54.245 *********
2026-03-31 04:38:25.068471 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068490 | orchestrator |
2026-03-31 04:38:25.068508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:38:25.068527 | orchestrator | Tuesday 31 March 2026 04:38:21 +0000 (0:00:00.411) 0:03:54.657 *********
2026-03-31 04:38:25.068546 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068565 | orchestrator |
2026-03-31 04:38:25.068585 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:38:25.068605 | orchestrator | Tuesday 31 March 2026 04:38:22 +0000 (0:00:00.142) 0:03:54.800 *********
2026-03-31 04:38:25.068624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:38:25.068643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:38:25.068662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:38:25.068681 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068700 | orchestrator |
2026-03-31 04:38:25.068719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:38:25.068739 | orchestrator | Tuesday 31 March 2026 04:38:22 +0000 (0:00:00.420) 0:03:55.221 *********
2026-03-31 04:38:25.068758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:38:25.068777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:38:25.068797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:38:25.068814 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068835 | orchestrator |
2026-03-31 04:38:25.068855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:38:25.068875 | orchestrator | Tuesday 31 March 2026 04:38:22 +0000 (0:00:00.401) 0:03:55.622 *********
2026-03-31 04:38:25.068895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:38:25.068907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:38:25.068940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:38:25.068951 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.068962 | orchestrator |
2026-03-31 04:38:25.068973 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:38:25.068984 | orchestrator | Tuesday 31 March 2026 04:38:23 +0000 (0:00:00.418) 0:03:56.041 *********
2026-03-31 04:38:25.068995 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:38:25.069006 | orchestrator |
2026-03-31
04:38:25.069016 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:38:25.069027 | orchestrator | Tuesday 31 March 2026 04:38:23 +0000 (0:00:00.130) 0:03:56.171 ********* 2026-03-31 04:38:25.069038 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-31 04:38:25.069049 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:25.069060 | orchestrator | 2026-03-31 04:38:25.069070 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:38:25.069081 | orchestrator | Tuesday 31 March 2026 04:38:24 +0000 (0:00:00.697) 0:03:56.869 ********* 2026-03-31 04:38:25.069092 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:38:25.069103 | orchestrator | 2026-03-31 04:38:25.069113 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:38:25.069136 | orchestrator | Tuesday 31 March 2026 04:38:25 +0000 (0:00:00.864) 0:03:57.733 ********* 2026-03-31 04:38:57.403066 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403204 | orchestrator | 2026-03-31 04:38:57.403225 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-31 04:38:57.403239 | orchestrator | Tuesday 31 March 2026 04:38:25 +0000 (0:00:00.165) 0:03:57.899 ********* 2026-03-31 04:38:57.403250 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-03-31 04:38:57.403285 | orchestrator | 2026-03-31 04:38:57.403298 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-31 04:38:57.403309 | orchestrator | Tuesday 31 March 2026 04:38:25 +0000 (0:00:00.512) 0:03:58.412 ********* 2026-03-31 04:38:57.403320 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:38:57.403331 | orchestrator | 2026-03-31 04:38:57.403342 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-03-31 04:38:57.403353 | orchestrator | Tuesday 31 March 2026 04:38:27 +0000 (0:00:02.092) 0:04:00.504 ********* 2026-03-31 04:38:57.403364 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:57.403375 | orchestrator | 2026-03-31 04:38:57.403386 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-31 04:38:57.403397 | orchestrator | Tuesday 31 March 2026 04:38:27 +0000 (0:00:00.167) 0:04:00.671 ********* 2026-03-31 04:38:57.403409 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403419 | orchestrator | 2026-03-31 04:38:57.403445 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-31 04:38:57.403457 | orchestrator | Tuesday 31 March 2026 04:38:28 +0000 (0:00:00.153) 0:04:00.825 ********* 2026-03-31 04:38:57.403468 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403479 | orchestrator | 2026-03-31 04:38:57.403490 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-31 04:38:57.403501 | orchestrator | Tuesday 31 March 2026 04:38:28 +0000 (0:00:00.156) 0:04:00.981 ********* 2026-03-31 04:38:57.403512 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:38:57.403523 | orchestrator | 2026-03-31 04:38:57.403534 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-31 04:38:57.403545 | orchestrator | Tuesday 31 March 2026 04:38:29 +0000 (0:00:01.008) 0:04:01.990 ********* 2026-03-31 04:38:57.403556 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403567 | orchestrator | 2026-03-31 04:38:57.403577 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-31 04:38:57.403589 | orchestrator | Tuesday 31 March 2026 04:38:29 +0000 (0:00:00.591) 0:04:02.581 ********* 2026-03-31 04:38:57.403600 | orchestrator | ok: 
[testbed-node-0] 2026-03-31 04:38:57.403612 | orchestrator | 2026-03-31 04:38:57.403623 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-31 04:38:57.403634 | orchestrator | Tuesday 31 March 2026 04:38:30 +0000 (0:00:00.491) 0:04:03.072 ********* 2026-03-31 04:38:57.403645 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403656 | orchestrator | 2026-03-31 04:38:57.403667 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-31 04:38:57.403678 | orchestrator | Tuesday 31 March 2026 04:38:30 +0000 (0:00:00.470) 0:04:03.543 ********* 2026-03-31 04:38:57.403689 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403700 | orchestrator | 2026-03-31 04:38:57.403711 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-31 04:38:57.403722 | orchestrator | Tuesday 31 March 2026 04:38:31 +0000 (0:00:00.737) 0:04:04.281 ********* 2026-03-31 04:38:57.403733 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403744 | orchestrator | 2026-03-31 04:38:57.403755 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-31 04:38:57.403766 | orchestrator | Tuesday 31 March 2026 04:38:32 +0000 (0:00:00.681) 0:04:04.963 ********* 2026-03-31 04:38:57.403777 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-31 04:38:57.403789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-31 04:38:57.403800 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 04:38:57.403811 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-31 04:38:57.403822 | orchestrator | 2026-03-31 04:38:57.403833 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-31 04:38:57.403843 | orchestrator | Tuesday 31 March 2026 04:38:35 +0000 
(0:00:02.776) 0:04:07.739 ********* 2026-03-31 04:38:57.403862 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:38:57.403873 | orchestrator | 2026-03-31 04:38:57.403884 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-31 04:38:57.403895 | orchestrator | Tuesday 31 March 2026 04:38:36 +0000 (0:00:01.151) 0:04:08.890 ********* 2026-03-31 04:38:57.403906 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403917 | orchestrator | 2026-03-31 04:38:57.403928 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-31 04:38:57.403939 | orchestrator | Tuesday 31 March 2026 04:38:36 +0000 (0:00:00.155) 0:04:09.046 ********* 2026-03-31 04:38:57.403950 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.403984 | orchestrator | 2026-03-31 04:38:57.403998 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-31 04:38:57.404009 | orchestrator | Tuesday 31 March 2026 04:38:36 +0000 (0:00:00.444) 0:04:09.491 ********* 2026-03-31 04:38:57.404020 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404031 | orchestrator | 2026-03-31 04:38:57.404042 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-31 04:38:57.404053 | orchestrator | Tuesday 31 March 2026 04:38:37 +0000 (0:00:00.730) 0:04:10.222 ********* 2026-03-31 04:38:57.404064 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404075 | orchestrator | 2026-03-31 04:38:57.404094 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-31 04:38:57.404114 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.477) 0:04:10.699 ********* 2026-03-31 04:38:57.404132 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:57.404152 | orchestrator | 2026-03-31 04:38:57.404172 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-31 04:38:57.404192 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.136) 0:04:10.836 ********* 2026-03-31 04:38:57.404227 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-31 04:38:57.404239 | orchestrator | 2026-03-31 04:38:57.404250 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-31 04:38:57.404261 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.220) 0:04:11.056 ********* 2026-03-31 04:38:57.404272 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:57.404283 | orchestrator | 2026-03-31 04:38:57.404294 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-31 04:38:57.404305 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.135) 0:04:11.192 ********* 2026-03-31 04:38:57.404316 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:57.404327 | orchestrator | 2026-03-31 04:38:57.404338 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-31 04:38:57.404349 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.148) 0:04:11.340 ********* 2026-03-31 04:38:57.404360 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-31 04:38:57.404371 | orchestrator | 2026-03-31 04:38:57.404382 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-31 04:38:57.404393 | orchestrator | Tuesday 31 March 2026 04:38:38 +0000 (0:00:00.232) 0:04:11.572 ********* 2026-03-31 04:38:57.404403 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404414 | orchestrator | 2026-03-31 04:38:57.404432 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-31 04:38:57.404443 | orchestrator | Tuesday 31 March 2026 04:38:40 +0000 
(0:00:01.634) 0:04:13.206 ********* 2026-03-31 04:38:57.404454 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404465 | orchestrator | 2026-03-31 04:38:57.404476 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-31 04:38:57.404487 | orchestrator | Tuesday 31 March 2026 04:38:41 +0000 (0:00:00.906) 0:04:14.113 ********* 2026-03-31 04:38:57.404498 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404509 | orchestrator | 2026-03-31 04:38:57.404520 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-31 04:38:57.404531 | orchestrator | Tuesday 31 March 2026 04:38:42 +0000 (0:00:01.397) 0:04:15.510 ********* 2026-03-31 04:38:57.404550 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:38:57.404561 | orchestrator | 2026-03-31 04:38:57.404572 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-31 04:38:57.404583 | orchestrator | Tuesday 31 March 2026 04:38:45 +0000 (0:00:02.558) 0:04:18.069 ********* 2026-03-31 04:38:57.404594 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-31 04:38:57.404604 | orchestrator | 2026-03-31 04:38:57.404615 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-31 04:38:57.404626 | orchestrator | Tuesday 31 March 2026 04:38:45 +0000 (0:00:00.223) 0:04:18.293 ********* 2026-03-31 04:38:57.404637 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404648 | orchestrator | 2026-03-31 04:38:57.404659 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-31 04:38:57.404670 | orchestrator | Tuesday 31 March 2026 04:38:46 +0000 (0:00:01.225) 0:04:19.518 ********* 2026-03-31 04:38:57.404681 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:38:57.404692 | orchestrator | 2026-03-31 04:38:57.404703 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-31 04:38:57.404714 | orchestrator | Tuesday 31 March 2026 04:38:48 +0000 (0:00:01.894) 0:04:21.413 ********* 2026-03-31 04:38:57.404725 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:38:57.404736 | orchestrator | 2026-03-31 04:38:57.404747 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-31 04:38:57.404757 | orchestrator | Tuesday 31 March 2026 04:38:48 +0000 (0:00:00.161) 0:04:21.574 ********* 2026-03-31 04:38:57.404770 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-31 04:38:57.404784 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-31 04:38:57.404802 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-31 04:38:57.404821 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-31 04:38:57.404853 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-31 04:39:11.562548 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}])  2026-03-31 04:39:11.562654 | orchestrator | 2026-03-31 04:39:11.562666 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-31 04:39:11.562674 | orchestrator | Tuesday 31 March 2026 04:38:57 +0000 (0:00:08.496) 0:04:30.070 ********* 
2026-03-31 04:39:11.562680 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:39:11.562688 | orchestrator | 2026-03-31 04:39:11.562706 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:39:11.562712 | orchestrator | Tuesday 31 March 2026 04:38:58 +0000 (0:00:01.472) 0:04:31.543 ********* 2026-03-31 04:39:11.562719 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:39:11.562726 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:39:11.562732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:39:11.562738 | orchestrator | 2026-03-31 04:39:11.562744 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:39:11.562750 | orchestrator | Tuesday 31 March 2026 04:39:00 +0000 (0:00:01.197) 0:04:32.740 ********* 2026-03-31 04:39:11.562757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:39:11.562763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:39:11.562769 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:39:11.562775 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562782 | orchestrator | 2026-03-31 04:39:11.562788 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-31 04:39:11.562794 | orchestrator | Tuesday 31 March 2026 04:39:00 +0000 (0:00:00.441) 0:04:33.182 ********* 2026-03-31 04:39:11.562800 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562806 | orchestrator | 2026-03-31 04:39:11.562813 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-31 04:39:11.562820 | orchestrator | Tuesday 31 March 2026 04:39:00 +0000 (0:00:00.124) 0:04:33.306 ********* 2026-03-31 04:39:11.562826 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:39:11.562832 | orchestrator | 2026-03-31 04:39:11.562838 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:39:11.562845 | orchestrator | Tuesday 31 March 2026 04:39:02 +0000 (0:00:01.907) 0:04:35.214 ********* 2026-03-31 04:39:11.562851 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562857 | orchestrator | 2026-03-31 04:39:11.562863 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:39:11.562869 | orchestrator | Tuesday 31 March 2026 04:39:02 +0000 (0:00:00.136) 0:04:35.350 ********* 2026-03-31 04:39:11.562875 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562881 | orchestrator | 2026-03-31 04:39:11.562887 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:39:11.562894 | orchestrator | Tuesday 31 March 2026 04:39:02 +0000 (0:00:00.128) 0:04:35.479 ********* 2026-03-31 04:39:11.562900 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562906 | orchestrator | 2026-03-31 04:39:11.562912 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 04:39:11.562918 | orchestrator | Tuesday 31 March 2026 04:39:02 +0000 (0:00:00.135) 0:04:35.614 ********* 2026-03-31 04:39:11.562924 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562930 | orchestrator | 2026-03-31 04:39:11.562936 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 04:39:11.562943 | orchestrator | Tuesday 31 March 2026 04:39:03 +0000 (0:00:00.125) 0:04:35.739 ********* 2026-03-31 04:39:11.562949 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562955 | 
orchestrator | 2026-03-31 04:39:11.562961 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:39:11.562967 | orchestrator | Tuesday 31 March 2026 04:39:03 +0000 (0:00:00.144) 0:04:35.884 ********* 2026-03-31 04:39:11.562973 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.562979 | orchestrator | 2026-03-31 04:39:11.563058 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:39:11.563071 | orchestrator | Tuesday 31 March 2026 04:39:03 +0000 (0:00:00.129) 0:04:36.014 ********* 2026-03-31 04:39:11.563077 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:39:11.563083 | orchestrator | 2026-03-31 04:39:11.563089 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-31 04:39:11.563095 | orchestrator | 2026-03-31 04:39:11.563101 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-31 04:39:11.563107 | orchestrator | Tuesday 31 March 2026 04:39:03 +0000 (0:00:00.244) 0:04:36.258 ********* 2026-03-31 04:39:11.563114 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563120 | orchestrator | 2026-03-31 04:39:11.563127 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-31 04:39:11.563134 | orchestrator | Tuesday 31 March 2026 04:39:04 +0000 (0:00:00.480) 0:04:36.739 ********* 2026-03-31 04:39:11.563141 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563147 | orchestrator | 2026-03-31 04:39:11.563154 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-31 04:39:11.563161 | orchestrator | Tuesday 31 March 2026 04:39:04 +0000 (0:00:00.150) 0:04:36.889 ********* 2026-03-31 04:39:11.563167 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:11.563175 | orchestrator | 2026-03-31 04:39:11.563181 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-31 04:39:11.563188 | orchestrator | Tuesday 31 March 2026 04:39:04 +0000 (0:00:00.129) 0:04:37.019 ********* 2026-03-31 04:39:11.563195 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563202 | orchestrator | 2026-03-31 04:39:11.563220 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:39:11.563227 | orchestrator | Tuesday 31 March 2026 04:39:04 +0000 (0:00:00.429) 0:04:37.448 ********* 2026-03-31 04:39:11.563234 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-31 04:39:11.563241 | orchestrator | 2026-03-31 04:39:11.563248 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:39:11.563254 | orchestrator | Tuesday 31 March 2026 04:39:05 +0000 (0:00:00.254) 0:04:37.703 ********* 2026-03-31 04:39:11.563261 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563268 | orchestrator | 2026-03-31 04:39:11.563274 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:39:11.563280 | orchestrator | Tuesday 31 March 2026 04:39:05 +0000 (0:00:00.459) 0:04:38.162 ********* 2026-03-31 04:39:11.563287 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563294 | orchestrator | 2026-03-31 04:39:11.563300 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:39:11.563311 | orchestrator | Tuesday 31 March 2026 04:39:05 +0000 (0:00:00.144) 0:04:38.306 ********* 2026-03-31 04:39:11.563318 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563325 | orchestrator | 2026-03-31 04:39:11.563331 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:39:11.563338 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.481) 0:04:38.788 
********* 2026-03-31 04:39:11.563344 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563351 | orchestrator | 2026-03-31 04:39:11.563357 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:39:11.563364 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.151) 0:04:38.940 ********* 2026-03-31 04:39:11.563371 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563377 | orchestrator | 2026-03-31 04:39:11.563384 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:39:11.563391 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.139) 0:04:39.079 ********* 2026-03-31 04:39:11.563397 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563404 | orchestrator | 2026-03-31 04:39:11.563411 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:39:11.563417 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.153) 0:04:39.233 ********* 2026-03-31 04:39:11.563429 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:11.563436 | orchestrator | 2026-03-31 04:39:11.563443 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:39:11.563449 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.140) 0:04:39.373 ********* 2026-03-31 04:39:11.563455 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563462 | orchestrator | 2026-03-31 04:39:11.563469 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:39:11.563475 | orchestrator | Tuesday 31 March 2026 04:39:06 +0000 (0:00:00.141) 0:04:39.515 ********* 2026-03-31 04:39:11.563482 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:39:11.563488 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 
04:39:11.563494 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:39:11.563500 | orchestrator | 2026-03-31 04:39:11.563505 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:39:11.563511 | orchestrator | Tuesday 31 March 2026 04:39:07 +0000 (0:00:00.924) 0:04:40.440 ********* 2026-03-31 04:39:11.563517 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:11.563523 | orchestrator | 2026-03-31 04:39:11.563529 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:39:11.563535 | orchestrator | Tuesday 31 March 2026 04:39:08 +0000 (0:00:00.266) 0:04:40.706 ********* 2026-03-31 04:39:11.563540 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:39:11.563546 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:39:11.563552 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:39:11.563558 | orchestrator | 2026-03-31 04:39:11.563564 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:39:11.563569 | orchestrator | Tuesday 31 March 2026 04:39:10 +0000 (0:00:02.494) 0:04:43.200 ********* 2026-03-31 04:39:11.563575 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 04:39:11.563581 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 04:39:11.563587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-31 04:39:11.563593 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:11.563598 | orchestrator | 2026-03-31 04:39:11.563604 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:39:11.563610 | orchestrator | Tuesday 31 March 2026 04:39:10 +0000 (0:00:00.435) 
0:04:43.636 ********* 2026-03-31 04:39:11.563618 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:39:11.563626 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:39:11.563632 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:39:11.563638 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:11.563644 | orchestrator | 2026-03-31 04:39:11.563653 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:39:15.693165 | orchestrator | Tuesday 31 March 2026 04:39:11 +0000 (0:00:00.591) 0:04:44.227 ********* 2026-03-31 04:39:15.693281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.693343 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.693358 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.693370 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:15.693383 | orchestrator | 2026-03-31 04:39:15.693395 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:39:15.693407 | orchestrator | Tuesday 31 March 2026 04:39:11 +0000 (0:00:00.197) 0:04:44.425 ********* 2026-03-31 04:39:15.693420 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:39:08.869628', 'end': '2026-03-31 04:39:08.921682', 'delta': '0:00:00.052054', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:39:15.693435 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1ea1d727f3e0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:39:09.485739', 'end': 
'2026-03-31 04:39:09.533992', 'delta': '0:00:00.048253', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ea1d727f3e0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:39:15.693447 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'df3f30930c20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:39:10.027455', 'end': '2026-03-31 04:39:10.075252', 'delta': '0:00:00.047797', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df3f30930c20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:39:15.693459 | orchestrator | 2026-03-31 04:39:15.693471 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:39:15.693483 | orchestrator | Tuesday 31 March 2026 04:39:11 +0000 (0:00:00.194) 0:04:44.620 ********* 2026-03-31 04:39:15.693494 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:15.693514 | orchestrator | 2026-03-31 04:39:15.693544 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:39:15.693556 | orchestrator | Tuesday 31 March 2026 04:39:12 +0000 (0:00:00.255) 0:04:44.875 ********* 2026-03-31 04:39:15.693567 | orchestrator | 
skipping: [testbed-node-1]
2026-03-31 04:39:15.693578 | orchestrator |
2026-03-31 04:39:15.693590 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-31 04:39:15.693603 | orchestrator | Tuesday 31 March 2026 04:39:12 +0000 (0:00:00.313) 0:04:45.188 *********
2026-03-31 04:39:15.693616 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:15.693629 | orchestrator |
2026-03-31 04:39:15.693675 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-31 04:39:15.693688 | orchestrator | Tuesday 31 March 2026 04:39:12 +0000 (0:00:00.144) 0:04:45.333 *********
2026-03-31 04:39:15.693701 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-03-31 04:39:15.693713 | orchestrator |
2026-03-31 04:39:15.693727 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:39:15.693745 | orchestrator | Tuesday 31 March 2026 04:39:13 +0000 (0:00:01.024) 0:04:46.358 *********
2026-03-31 04:39:15.693758 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:15.693771 | orchestrator |
2026-03-31 04:39:15.693783 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-31 04:39:15.693796 | orchestrator | Tuesday 31 March 2026 04:39:13 +0000 (0:00:00.175) 0:04:46.534 *********
2026-03-31 04:39:15.693809 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.693822 | orchestrator |
2026-03-31 04:39:15.693834 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-31 04:39:15.693847 | orchestrator | Tuesday 31 March 2026 04:39:13 +0000 (0:00:00.138) 0:04:46.672 *********
2026-03-31 04:39:15.693859 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.693871 | orchestrator |
2026-03-31 04:39:15.693884 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:39:15.693896 | orchestrator | Tuesday 31 March 2026 04:39:14 +0000 (0:00:00.237) 0:04:46.909 *********
2026-03-31 04:39:15.693909 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.693922 | orchestrator |
2026-03-31 04:39:15.693935 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-31 04:39:15.693947 | orchestrator | Tuesday 31 March 2026 04:39:14 +0000 (0:00:00.123) 0:04:47.033 *********
2026-03-31 04:39:15.693960 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.693973 | orchestrator |
2026-03-31 04:39:15.693985 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-31 04:39:15.694087 | orchestrator | Tuesday 31 March 2026 04:39:14 +0000 (0:00:00.143) 0:04:47.177 *********
2026-03-31 04:39:15.694100 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.694111 | orchestrator |
2026-03-31 04:39:15.694122 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-31 04:39:15.694133 | orchestrator | Tuesday 31 March 2026 04:39:14 +0000 (0:00:00.407) 0:04:47.585 *********
2026-03-31 04:39:15.694144 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.694165 | orchestrator |
2026-03-31 04:39:15.694176 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-31 04:39:15.694187 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.163) 0:04:47.749 *********
2026-03-31 04:39:15.694198 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.694210 | orchestrator |
2026-03-31 04:39:15.694221 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 04:39:15.694232 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.131) 0:04:47.880 *********
2026-03-31 04:39:15.694243 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:15.694254
| orchestrator | 2026-03-31 04:39:15.694265 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:39:15.694277 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.139) 0:04:48.019 ********* 2026-03-31 04:39:15.694297 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:15.694308 | orchestrator | 2026-03-31 04:39:15.694319 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:39:15.694330 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.117) 0:04:48.137 ********* 2026-03-31 04:39:15.694342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.694354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.694366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-31 04:39:15.694388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:39:15.910821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.910921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.910937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 
04:39:15.910954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:39:15.911054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.911089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:39:15.911109 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:15.911122 | orchestrator | 2026-03-31 04:39:15.911134 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:39:15.911147 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.226) 0:04:48.364 ********* 2026-03-31 04:39:15.911160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.911173 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.911193 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.911206 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.911218 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:15.911237 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:21.743621 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:21.743714 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:21.743738 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:21.743758 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:39:21.743763 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
04:39:21.743769 | orchestrator |
2026-03-31 04:39:21.743774 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 04:39:21.743780 | orchestrator | Tuesday 31 March 2026 04:39:15 +0000 (0:00:00.219) 0:04:48.583 *********
2026-03-31 04:39:21.743784 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:21.743789 | orchestrator |
2026-03-31 04:39:21.743793 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 04:39:21.743797 | orchestrator | Tuesday 31 March 2026 04:39:16 +0000 (0:00:00.484) 0:04:49.068 *********
2026-03-31 04:39:21.743801 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:21.743804 | orchestrator |
2026-03-31 04:39:21.743808 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:39:21.743812 | orchestrator | Tuesday 31 March 2026 04:39:16 +0000 (0:00:00.130) 0:04:49.198 *********
2026-03-31 04:39:21.743816 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:21.743820 | orchestrator |
2026-03-31 04:39:21.743828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:39:21.743832 | orchestrator | Tuesday 31 March 2026 04:39:16 +0000 (0:00:00.457) 0:04:49.656 *********
2026-03-31 04:39:21.743836 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.743840 | orchestrator |
2026-03-31 04:39:21.743844 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:39:21.743847 | orchestrator | Tuesday 31 March 2026 04:39:17 +0000 (0:00:00.238) 0:04:49.793 *********
2026-03-31 04:39:21.743851 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.743855 | orchestrator |
2026-03-31 04:39:21.743859 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:39:21.743863 | orchestrator | Tuesday 31 March 2026 04:39:17 +0000 (0:00:00.142) 0:04:50.032 *********
2026-03-31 04:39:21.743867 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.743871 | orchestrator |
2026-03-31 04:39:21.743875 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 04:39:21.743878 | orchestrator | Tuesday 31 March 2026 04:39:17 +0000 (0:00:00.142) 0:04:50.175 *********
2026-03-31 04:39:21.743882 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-31 04:39:21.743886 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:21.743890 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-31 04:39:21.743894 | orchestrator |
2026-03-31 04:39:21.743898 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 04:39:21.743902 | orchestrator | Tuesday 31 March 2026 04:39:18 +0000 (0:00:01.261) 0:04:51.437 *********
2026-03-31 04:39:21.743906 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-31 04:39:21.743910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:21.743914 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-31 04:39:21.743918 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.743921 | orchestrator |
2026-03-31 04:39:21.743925 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 04:39:21.743929 | orchestrator | Tuesday 31 March 2026 04:39:18 +0000 (0:00:00.181) 0:04:51.618 *********
2026-03-31 04:39:21.743933 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.743937 | orchestrator |
2026-03-31 04:39:21.743941 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:39:21.743945 | orchestrator | Tuesday 31 March 2026 04:39:19 +0000 (0:00:00.149) 0:04:51.768 *********
2026-03-31 04:39:21.743949 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:39:21.743953 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:21.743957 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:39:21.743961 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:39:21.743965 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:39:21.743968 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:39:21.743972 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:39:21.743976 | orchestrator |
2026-03-31 04:39:21.743980 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:39:21.743984 | orchestrator | Tuesday 31 March 2026 04:39:19 +0000 (0:00:00.807) 0:04:52.575 *********
2026-03-31 04:39:21.743988 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:39:21.743992 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:21.744039 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:39:21.744044 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:39:21.744051 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:39:21.744055 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:39:21.744059 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:39:21.744063 | orchestrator |
2026-03-31 04:39:21.744066 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-31 04:39:21.744070 | orchestrator | Tuesday 31 March 2026 04:39:21 +0000 (0:00:01.610) 0:04:54.186 *********
2026-03-31 04:39:21.744074 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:21.744078 | orchestrator |
2026-03-31 04:39:21.744082 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-31 04:39:21.744096 | orchestrator | Tuesday 31 March 2026 04:39:21 +0000 (0:00:00.231) 0:04:54.418 *********
2026-03-31 04:39:34.469144 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469288 | orchestrator |
2026-03-31 04:39:34.469316 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-31 04:39:34.469338 | orchestrator | Tuesday 31 March 2026 04:39:21 +0000 (0:00:00.220) 0:04:54.638 *********
2026-03-31 04:39:34.469358 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469376 | orchestrator |
2026-03-31 04:39:34.469396 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-31 04:39:34.469416 | orchestrator | Tuesday 31 March 2026 04:39:22 +0000 (0:00:00.130) 0:04:54.768 *********
2026-03-31 04:39:34.469435 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469455 | orchestrator |
2026-03-31 04:39:34.469474 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-31 04:39:34.469493 | orchestrator | Tuesday 31 March 2026 04:39:22 +0000 (0:00:00.228) 0:04:54.996 *********
2026-03-31 04:39:34.469512 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469531 | orchestrator |
2026-03-31 04:39:34.469550 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-31 04:39:34.469568 | orchestrator | Tuesday 31 March 2026 04:39:22 +0000 (0:00:00.143) 0:04:55.140 *********
2026-03-31 04:39:34.469588 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-31 04:39:34.469610 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:34.469630 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-31 04:39:34.469648 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469668 | orchestrator |
2026-03-31 04:39:34.469687 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-31 04:39:34.469707 | orchestrator | Tuesday 31 March 2026 04:39:23 +0000 (0:00:00.697) 0:04:55.837 *********
2026-03-31 04:39:34.469726 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-31 04:39:34.469738 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-31 04:39:34.469749 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-31 04:39:34.469761 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-31 04:39:34.469771 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-31 04:39:34.469783 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-31 04:39:34.469793 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.469804 | orchestrator |
2026-03-31 04:39:34.469815 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-31 04:39:34.469826 | orchestrator | Tuesday 31 March 2026 04:39:24 +0000 (0:00:00.956) 0:04:56.793 *********
2026-03-31 04:39:34.469838 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:34.469849 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:39:34.469860 | orchestrator |
2026-03-31 04:39:34.469871 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-31 04:39:34.469907 | orchestrator | Tuesday 31 March 2026 04:39:26 +0000 (0:00:02.617) 0:04:59.410 *********
2026-03-31 04:39:34.469918 | orchestrator | changed: [testbed-node-1]
2026-03-31 04:39:34.469929 | orchestrator |
2026-03-31 04:39:34.469940 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:39:34.469951 | orchestrator | Tuesday 31 March 2026 04:39:28 +0000 (0:00:01.370) 0:05:00.781 *********
2026-03-31 04:39:34.469962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-31 04:39:34.469974 | orchestrator |
2026-03-31 04:39:34.469985 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:39:34.469996 | orchestrator | Tuesday 31 March 2026 04:39:28 +0000 (0:00:00.219) 0:05:01.001 *********
2026-03-31 04:39:34.470007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-31 04:39:34.470097 | orchestrator |
2026-03-31 04:39:34.470109 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:39:34.470120 | orchestrator | Tuesday 31 March 2026 04:39:28 +0000 (0:00:00.224) 0:05:01.225 *********
2026-03-31 04:39:34.470131 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:34.470142 | orchestrator |
2026-03-31 04:39:34.470153 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:39:34.470164 | orchestrator | Tuesday 31 March 2026 04:39:29 +0000 (0:00:00.519) 0:05:01.745 *********
2026-03-31 04:39:34.470175 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470186 | orchestrator |
2026-03-31 04:39:34.470197 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:39:34.470208 | orchestrator | Tuesday 31 March 2026 04:39:29 +0000 (0:00:00.137) 0:05:01.882 *********
2026-03-31 04:39:34.470219 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470230 | orchestrator |
2026-03-31 04:39:34.470241 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:39:34.470252 | orchestrator | Tuesday 31 March 2026 04:39:29 +0000 (0:00:00.129) 0:05:02.011 *********
2026-03-31 04:39:34.470262 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470274 | orchestrator |
2026-03-31 04:39:34.470285 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:39:34.470295 | orchestrator | Tuesday 31 March 2026 04:39:29 +0000 (0:00:00.133) 0:05:02.145 *********
2026-03-31 04:39:34.470306 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:34.470317 | orchestrator |
2026-03-31 04:39:34.470328 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:39:34.470339 | orchestrator | Tuesday 31 March 2026 04:39:29 +0000 (0:00:00.522) 0:05:02.668 *********
2026-03-31 04:39:34.470350 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470361 | orchestrator |
2026-03-31 04:39:34.470372 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:39:34.470418 | orchestrator | Tuesday 31 March 2026 04:39:30 +0000 (0:00:00.157) 0:05:02.825 *********
2026-03-31 04:39:34.470431 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470442 | orchestrator |
2026-03-31 04:39:34.470453 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:39:34.470464 | orchestrator | Tuesday 31 March 2026 04:39:30 +0000 (0:00:00.120) 0:05:02.946 *********
2026-03-31 04:39:34.470475 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:34.470486 | orchestrator |
2026-03-31 04:39:34.470497 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:39:34.470508 | orchestrator | Tuesday 31 March 2026 04:39:31 +0000 (0:00:00.819) 0:05:03.765 *********
2026-03-31 04:39:34.470519 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:34.470530 | orchestrator |
2026-03-31 04:39:34.470541 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:39:34.470552 | orchestrator | Tuesday 31 March 2026 04:39:31 +0000 (0:00:00.537) 0:05:04.303 *********
2026-03-31 04:39:34.470562 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470582 | orchestrator |
2026-03-31 04:39:34.470593 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:39:34.470605 | orchestrator | Tuesday 31 March 2026 04:39:31 +0000 (0:00:00.120) 0:05:04.423 *********
2026-03-31 04:39:34.470615 | orchestrator | ok: [testbed-node-1]
2026-03-31 04:39:34.470626 | orchestrator |
2026-03-31 04:39:34.470637 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:39:34.470648 | orchestrator | Tuesday 31 March 2026 04:39:31 +0000 (0:00:00.152) 0:05:04.575 *********
2026-03-31 04:39:34.470659 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470670 | orchestrator |
2026-03-31 04:39:34.470681 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:39:34.470692 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.136) 0:05:04.712 *********
2026-03-31 04:39:34.470703 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:39:34.470713 | orchestrator |
2026-03-31 04:39:34.470724 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:39:34.470735 | orchestrator | Tuesday 31
March 2026 04:39:32 +0000 (0:00:00.130) 0:05:04.842 ********* 2026-03-31 04:39:34.470746 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.470757 | orchestrator | 2026-03-31 04:39:34.470768 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:39:34.470779 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.136) 0:05:04.978 ********* 2026-03-31 04:39:34.470790 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.470801 | orchestrator | 2026-03-31 04:39:34.470812 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:39:34.470823 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.127) 0:05:05.106 ********* 2026-03-31 04:39:34.470833 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.470844 | orchestrator | 2026-03-31 04:39:34.470855 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:39:34.470866 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.138) 0:05:05.245 ********* 2026-03-31 04:39:34.470877 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:34.470888 | orchestrator | 2026-03-31 04:39:34.470899 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:39:34.470910 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.146) 0:05:05.391 ********* 2026-03-31 04:39:34.470921 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:34.470931 | orchestrator | 2026-03-31 04:39:34.470942 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:39:34.470954 | orchestrator | Tuesday 31 March 2026 04:39:32 +0000 (0:00:00.156) 0:05:05.548 ********* 2026-03-31 04:39:34.470973 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:34.470993 | orchestrator | 2026-03-31 04:39:34.471011 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:39:34.471052 | orchestrator | Tuesday 31 March 2026 04:39:33 +0000 (0:00:00.276) 0:05:05.825 ********* 2026-03-31 04:39:34.471072 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471083 | orchestrator | 2026-03-31 04:39:34.471094 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:39:34.471105 | orchestrator | Tuesday 31 March 2026 04:39:33 +0000 (0:00:00.400) 0:05:06.225 ********* 2026-03-31 04:39:34.471116 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471127 | orchestrator | 2026-03-31 04:39:34.471138 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:39:34.471148 | orchestrator | Tuesday 31 March 2026 04:39:33 +0000 (0:00:00.128) 0:05:06.354 ********* 2026-03-31 04:39:34.471159 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471170 | orchestrator | 2026-03-31 04:39:34.471181 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:39:34.471192 | orchestrator | Tuesday 31 March 2026 04:39:33 +0000 (0:00:00.141) 0:05:06.495 ********* 2026-03-31 04:39:34.471202 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471221 | orchestrator | 2026-03-31 04:39:34.471232 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:39:34.471243 | orchestrator | Tuesday 31 March 2026 04:39:33 +0000 (0:00:00.136) 0:05:06.632 ********* 2026-03-31 04:39:34.471254 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471265 | orchestrator | 2026-03-31 04:39:34.471276 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:39:34.471287 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.138) 0:05:06.771 ********* 2026-03-31 
04:39:34.471297 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471308 | orchestrator | 2026-03-31 04:39:34.471319 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:39:34.471330 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.124) 0:05:06.896 ********* 2026-03-31 04:39:34.471341 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471352 | orchestrator | 2026-03-31 04:39:34.471363 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:39:34.471374 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.127) 0:05:07.023 ********* 2026-03-31 04:39:34.471385 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:34.471396 | orchestrator | 2026-03-31 04:39:34.471421 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:39:51.873691 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.113) 0:05:07.137 ********* 2026-03-31 04:39:51.873792 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.873806 | orchestrator | 2026-03-31 04:39:51.873816 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:39:51.873825 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.133) 0:05:07.271 ********* 2026-03-31 04:39:51.873833 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.873841 | orchestrator | 2026-03-31 04:39:51.873849 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:39:51.873858 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.144) 0:05:07.415 ********* 2026-03-31 04:39:51.873866 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.873874 | orchestrator | 2026-03-31 04:39:51.873882 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-31 04:39:51.873890 | orchestrator | Tuesday 31 March 2026 04:39:34 +0000 (0:00:00.129) 0:05:07.545 ********* 2026-03-31 04:39:51.873898 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.873906 | orchestrator | 2026-03-31 04:39:51.873914 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:39:51.873922 | orchestrator | Tuesday 31 March 2026 04:39:35 +0000 (0:00:00.199) 0:05:07.744 ********* 2026-03-31 04:39:51.873930 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.873939 | orchestrator | 2026-03-31 04:39:51.873947 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:39:51.873955 | orchestrator | Tuesday 31 March 2026 04:39:36 +0000 (0:00:00.970) 0:05:08.715 ********* 2026-03-31 04:39:51.873963 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.873971 | orchestrator | 2026-03-31 04:39:51.873979 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:39:51.873987 | orchestrator | Tuesday 31 March 2026 04:39:38 +0000 (0:00:02.061) 0:05:10.777 ********* 2026-03-31 04:39:51.873995 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-31 04:39:51.874003 | orchestrator | 2026-03-31 04:39:51.874011 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:39:51.874110 | orchestrator | Tuesday 31 March 2026 04:39:38 +0000 (0:00:00.207) 0:05:10.984 ********* 2026-03-31 04:39:51.874120 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874128 | orchestrator | 2026-03-31 04:39:51.874136 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:39:51.874144 | orchestrator | Tuesday 31 March 2026 04:39:38 +0000 (0:00:00.140) 0:05:11.124 ********* 
2026-03-31 04:39:51.874172 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874180 | orchestrator | 2026-03-31 04:39:51.874188 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:39:51.874196 | orchestrator | Tuesday 31 March 2026 04:39:38 +0000 (0:00:00.120) 0:05:11.245 ********* 2026-03-31 04:39:51.874204 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:39:51.874212 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:39:51.874221 | orchestrator | 2026-03-31 04:39:51.874229 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:39:51.874237 | orchestrator | Tuesday 31 March 2026 04:39:39 +0000 (0:00:00.836) 0:05:12.081 ********* 2026-03-31 04:39:51.874245 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.874253 | orchestrator | 2026-03-31 04:39:51.874261 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:39:51.874269 | orchestrator | Tuesday 31 March 2026 04:39:39 +0000 (0:00:00.438) 0:05:12.520 ********* 2026-03-31 04:39:51.874277 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874284 | orchestrator | 2026-03-31 04:39:51.874292 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:39:51.874300 | orchestrator | Tuesday 31 March 2026 04:39:39 +0000 (0:00:00.148) 0:05:12.668 ********* 2026-03-31 04:39:51.874308 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874316 | orchestrator | 2026-03-31 04:39:51.874324 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:39:51.874332 | orchestrator | Tuesday 31 March 2026 04:39:40 +0000 (0:00:00.158) 0:05:12.826 ********* 2026-03-31 04:39:51.874345 | orchestrator | 
skipping: [testbed-node-1] 2026-03-31 04:39:51.874359 | orchestrator | 2026-03-31 04:39:51.874372 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:39:51.874386 | orchestrator | Tuesday 31 March 2026 04:39:40 +0000 (0:00:00.140) 0:05:12.966 ********* 2026-03-31 04:39:51.874401 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-31 04:39:51.874414 | orchestrator | 2026-03-31 04:39:51.874428 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:39:51.874436 | orchestrator | Tuesday 31 March 2026 04:39:40 +0000 (0:00:00.219) 0:05:13.186 ********* 2026-03-31 04:39:51.874444 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.874452 | orchestrator | 2026-03-31 04:39:51.874465 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:39:51.874478 | orchestrator | Tuesday 31 March 2026 04:39:41 +0000 (0:00:00.702) 0:05:13.889 ********* 2026-03-31 04:39:51.874490 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:39:51.874503 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:39:51.874517 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:39:51.874530 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874538 | orchestrator | 2026-03-31 04:39:51.874546 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:39:51.874555 | orchestrator | Tuesday 31 March 2026 04:39:41 +0000 (0:00:00.371) 0:05:14.260 ********* 2026-03-31 04:39:51.874575 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874583 | orchestrator | 2026-03-31 04:39:51.874608 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-31 04:39:51.874617 | orchestrator | Tuesday 31 March 2026 04:39:41 +0000 (0:00:00.138) 0:05:14.398 ********* 2026-03-31 04:39:51.874625 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874633 | orchestrator | 2026-03-31 04:39:51.874641 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:39:51.874649 | orchestrator | Tuesday 31 March 2026 04:39:41 +0000 (0:00:00.160) 0:05:14.558 ********* 2026-03-31 04:39:51.874665 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874673 | orchestrator | 2026-03-31 04:39:51.874681 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:39:51.874689 | orchestrator | Tuesday 31 March 2026 04:39:42 +0000 (0:00:00.147) 0:05:14.706 ********* 2026-03-31 04:39:51.874697 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874705 | orchestrator | 2026-03-31 04:39:51.874713 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:39:51.874721 | orchestrator | Tuesday 31 March 2026 04:39:42 +0000 (0:00:00.151) 0:05:14.859 ********* 2026-03-31 04:39:51.874728 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874736 | orchestrator | 2026-03-31 04:39:51.874744 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:39:51.874752 | orchestrator | Tuesday 31 March 2026 04:39:42 +0000 (0:00:00.160) 0:05:15.020 ********* 2026-03-31 04:39:51.874760 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.874768 | orchestrator | 2026-03-31 04:39:51.874776 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:39:51.874784 | orchestrator | Tuesday 31 March 2026 04:39:43 +0000 (0:00:01.496) 0:05:16.516 ********* 2026-03-31 04:39:51.874791 | orchestrator | ok: 
[testbed-node-1] 2026-03-31 04:39:51.874799 | orchestrator | 2026-03-31 04:39:51.874807 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:39:51.874815 | orchestrator | Tuesday 31 March 2026 04:39:43 +0000 (0:00:00.141) 0:05:16.658 ********* 2026-03-31 04:39:51.874823 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-31 04:39:51.874831 | orchestrator | 2026-03-31 04:39:51.874839 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:39:51.874847 | orchestrator | Tuesday 31 March 2026 04:39:44 +0000 (0:00:00.233) 0:05:16.891 ********* 2026-03-31 04:39:51.874855 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874862 | orchestrator | 2026-03-31 04:39:51.874870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:39:51.874878 | orchestrator | Tuesday 31 March 2026 04:39:44 +0000 (0:00:00.140) 0:05:17.032 ********* 2026-03-31 04:39:51.874886 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874894 | orchestrator | 2026-03-31 04:39:51.874902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:39:51.874910 | orchestrator | Tuesday 31 March 2026 04:39:44 +0000 (0:00:00.130) 0:05:17.163 ********* 2026-03-31 04:39:51.874918 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874926 | orchestrator | 2026-03-31 04:39:51.874934 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:39:51.874942 | orchestrator | Tuesday 31 March 2026 04:39:44 +0000 (0:00:00.149) 0:05:17.312 ********* 2026-03-31 04:39:51.874949 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874957 | orchestrator | 2026-03-31 04:39:51.874965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-31 04:39:51.874973 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.398) 0:05:17.711 ********* 2026-03-31 04:39:51.874981 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.874989 | orchestrator | 2026-03-31 04:39:51.874997 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:39:51.875005 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.145) 0:05:17.857 ********* 2026-03-31 04:39:51.875013 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.875020 | orchestrator | 2026-03-31 04:39:51.875028 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:39:51.875078 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.164) 0:05:18.021 ********* 2026-03-31 04:39:51.875088 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.875096 | orchestrator | 2026-03-31 04:39:51.875104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:39:51.875112 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.153) 0:05:18.175 ********* 2026-03-31 04:39:51.875125 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:39:51.875133 | orchestrator | 2026-03-31 04:39:51.875140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:39:51.875148 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.152) 0:05:18.328 ********* 2026-03-31 04:39:51.875156 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:39:51.875164 | orchestrator | 2026-03-31 04:39:51.875172 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:39:51.875180 | orchestrator | Tuesday 31 March 2026 04:39:45 +0000 (0:00:00.255) 0:05:18.583 ********* 2026-03-31 04:39:51.875187 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-31 04:39:51.875195 | orchestrator | 2026-03-31 04:39:51.875203 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:39:51.875211 | orchestrator | Tuesday 31 March 2026 04:39:46 +0000 (0:00:00.246) 0:05:18.830 ********* 2026-03-31 04:39:51.875219 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-31 04:39:51.875227 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-31 04:39:51.875235 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-31 04:39:51.875243 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-31 04:39:51.875251 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-31 04:39:51.875258 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-31 04:39:51.875271 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-31 04:39:51.875284 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:40:02.528525 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:40:02.528673 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:40:02.528700 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:40:02.528720 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:40:02.528738 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:40:02.528757 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:40:02.528776 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-31 04:40:02.528796 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-31 04:40:02.528814 | orchestrator | 2026-03-31 04:40:02.528833 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:40:02.528852 | orchestrator | Tuesday 31 March 2026 04:39:51 +0000 (0:00:05.703) 0:05:24.533 ********* 2026-03-31 04:40:02.528870 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.528888 | orchestrator | 2026-03-31 04:40:02.528907 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:40:02.528924 | orchestrator | Tuesday 31 March 2026 04:39:51 +0000 (0:00:00.125) 0:05:24.659 ********* 2026-03-31 04:40:02.528942 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.528961 | orchestrator | 2026-03-31 04:40:02.528980 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:40:02.528998 | orchestrator | Tuesday 31 March 2026 04:39:52 +0000 (0:00:00.162) 0:05:24.821 ********* 2026-03-31 04:40:02.529016 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529036 | orchestrator | 2026-03-31 04:40:02.529086 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:40:02.529106 | orchestrator | Tuesday 31 March 2026 04:39:52 +0000 (0:00:00.138) 0:05:24.959 ********* 2026-03-31 04:40:02.529124 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529142 | orchestrator | 2026-03-31 04:40:02.529160 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:40:02.529179 | orchestrator | Tuesday 31 March 2026 04:39:52 +0000 (0:00:00.413) 0:05:25.373 ********* 2026-03-31 04:40:02.529196 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529247 | orchestrator | 2026-03-31 04:40:02.529266 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:40:02.529284 | orchestrator | Tuesday 31 March 2026 04:39:52 +0000 (0:00:00.136) 0:05:25.509 ********* 2026-03-31 
04:40:02.529303 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529322 | orchestrator | 2026-03-31 04:40:02.529339 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:40:02.529360 | orchestrator | Tuesday 31 March 2026 04:39:52 +0000 (0:00:00.142) 0:05:25.652 ********* 2026-03-31 04:40:02.529378 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529395 | orchestrator | 2026-03-31 04:40:02.529414 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:40:02.529433 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.176) 0:05:25.828 ********* 2026-03-31 04:40:02.529450 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529467 | orchestrator | 2026-03-31 04:40:02.529486 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:40:02.529504 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.122) 0:05:25.951 ********* 2026-03-31 04:40:02.529522 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529541 | orchestrator | 2026-03-31 04:40:02.529558 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:40:02.529576 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.154) 0:05:26.105 ********* 2026-03-31 04:40:02.529594 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529611 | orchestrator | 2026-03-31 04:40:02.529629 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:40:02.529648 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.136) 0:05:26.242 ********* 2026-03-31 04:40:02.529666 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529683 | orchestrator | 2026-03-31 
04:40:02.529702 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:40:02.529720 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.134) 0:05:26.377 ********* 2026-03-31 04:40:02.529739 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529757 | orchestrator | 2026-03-31 04:40:02.529775 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:40:02.529794 | orchestrator | Tuesday 31 March 2026 04:39:53 +0000 (0:00:00.138) 0:05:26.515 ********* 2026-03-31 04:40:02.529813 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529831 | orchestrator | 2026-03-31 04:40:02.529849 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:40:02.529867 | orchestrator | Tuesday 31 March 2026 04:39:54 +0000 (0:00:00.252) 0:05:26.768 ********* 2026-03-31 04:40:02.529883 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529899 | orchestrator | 2026-03-31 04:40:02.529919 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:40:02.529937 | orchestrator | Tuesday 31 March 2026 04:39:54 +0000 (0:00:00.137) 0:05:26.906 ********* 2026-03-31 04:40:02.529955 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.529973 | orchestrator | 2026-03-31 04:40:02.529989 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:40:02.530009 | orchestrator | Tuesday 31 March 2026 04:39:54 +0000 (0:00:00.221) 0:05:27.127 ********* 2026-03-31 04:40:02.530143 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530165 | orchestrator | 2026-03-31 04:40:02.530200 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:40:02.530234 | orchestrator | Tuesday 31 March 2026 04:39:54 +0000 (0:00:00.128) 
0:05:27.256 ********* 2026-03-31 04:40:02.530275 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530298 | orchestrator | 2026-03-31 04:40:02.530348 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:40:02.530389 | orchestrator | Tuesday 31 March 2026 04:39:55 +0000 (0:00:00.446) 0:05:27.702 ********* 2026-03-31 04:40:02.530407 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530427 | orchestrator | 2026-03-31 04:40:02.530446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:40:02.530465 | orchestrator | Tuesday 31 March 2026 04:39:55 +0000 (0:00:00.128) 0:05:27.830 ********* 2026-03-31 04:40:02.530485 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530504 | orchestrator | 2026-03-31 04:40:02.530523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:40:02.530542 | orchestrator | Tuesday 31 March 2026 04:39:55 +0000 (0:00:00.149) 0:05:27.979 ********* 2026-03-31 04:40:02.530561 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530579 | orchestrator | 2026-03-31 04:40:02.530598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:40:02.530619 | orchestrator | Tuesday 31 March 2026 04:39:55 +0000 (0:00:00.162) 0:05:28.142 ********* 2026-03-31 04:40:02.530640 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530659 | orchestrator | 2026-03-31 04:40:02.530678 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:40:02.530699 | orchestrator | Tuesday 31 March 2026 04:39:55 +0000 (0:00:00.133) 0:05:28.276 ********* 2026-03-31 04:40:02.530718 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:40:02.530738 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:40:02.530754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:40:02.530773 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530793 | orchestrator | 2026-03-31 04:40:02.530813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:40:02.530833 | orchestrator | Tuesday 31 March 2026 04:39:56 +0000 (0:00:00.411) 0:05:28.688 ********* 2026-03-31 04:40:02.530853 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:40:02.530872 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:40:02.530892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:40:02.530911 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.530929 | orchestrator | 2026-03-31 04:40:02.530948 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:40:02.530969 | orchestrator | Tuesday 31 March 2026 04:39:56 +0000 (0:00:00.411) 0:05:29.099 ********* 2026-03-31 04:40:02.530987 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:40:02.531004 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:40:02.531015 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:40:02.531026 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.531037 | orchestrator | 2026-03-31 04:40:02.531048 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:40:02.531085 | orchestrator | Tuesday 31 March 2026 04:39:56 +0000 (0:00:00.390) 0:05:29.490 ********* 2026-03-31 04:40:02.531096 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.531107 | orchestrator | 2026-03-31 04:40:02.531118 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-31 04:40:02.531129 | orchestrator | Tuesday 31 March 2026 04:39:56 +0000 (0:00:00.137) 0:05:29.627 ********* 2026-03-31 04:40:02.531141 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-31 04:40:02.531152 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.531163 | orchestrator | 2026-03-31 04:40:02.531174 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:40:02.531185 | orchestrator | Tuesday 31 March 2026 04:39:57 +0000 (0:00:00.336) 0:05:29.964 ********* 2026-03-31 04:40:02.531196 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:02.531206 | orchestrator | 2026-03-31 04:40:02.531218 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:40:02.531243 | orchestrator | Tuesday 31 March 2026 04:39:58 +0000 (0:00:00.764) 0:05:30.728 ********* 2026-03-31 04:40:02.531254 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:02.531265 | orchestrator | 2026-03-31 04:40:02.531276 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-31 04:40:02.531287 | orchestrator | Tuesday 31 March 2026 04:39:58 +0000 (0:00:00.153) 0:05:30.882 ********* 2026-03-31 04:40:02.531298 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-03-31 04:40:02.531310 | orchestrator | 2026-03-31 04:40:02.531321 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-31 04:40:02.531331 | orchestrator | Tuesday 31 March 2026 04:39:58 +0000 (0:00:00.527) 0:05:31.409 ********* 2026-03-31 04:40:02.531342 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:40:02.531353 | orchestrator | 2026-03-31 04:40:02.531364 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] 
***************************** 2026-03-31 04:40:02.531375 | orchestrator | Tuesday 31 March 2026 04:40:00 +0000 (0:00:02.205) 0:05:33.614 ********* 2026-03-31 04:40:02.531386 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:02.531397 | orchestrator | 2026-03-31 04:40:02.531408 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-31 04:40:02.531419 | orchestrator | Tuesday 31 March 2026 04:40:01 +0000 (0:00:00.177) 0:05:33.792 ********* 2026-03-31 04:40:02.531430 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:02.531441 | orchestrator | 2026-03-31 04:40:02.531452 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-31 04:40:02.531463 | orchestrator | Tuesday 31 March 2026 04:40:01 +0000 (0:00:00.158) 0:05:33.951 ********* 2026-03-31 04:40:02.531474 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:02.531485 | orchestrator | 2026-03-31 04:40:02.531496 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-31 04:40:02.531517 | orchestrator | Tuesday 31 March 2026 04:40:01 +0000 (0:00:00.192) 0:05:34.143 ********* 2026-03-31 04:40:02.531543 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:55.025170 | orchestrator | 2026-03-31 04:40:55.025291 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-31 04:40:55.025310 | orchestrator | Tuesday 31 March 2026 04:40:02 +0000 (0:00:01.052) 0:05:35.196 ********* 2026-03-31 04:40:55.025323 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025336 | orchestrator | 2026-03-31 04:40:55.025348 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-31 04:40:55.025359 | orchestrator | Tuesday 31 March 2026 04:40:03 +0000 (0:00:00.646) 0:05:35.843 ********* 2026-03-31 04:40:55.025370 | orchestrator | ok: [testbed-node-1] 2026-03-31 
04:40:55.025382 | orchestrator | 2026-03-31 04:40:55.025393 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-31 04:40:55.025404 | orchestrator | Tuesday 31 March 2026 04:40:03 +0000 (0:00:00.461) 0:05:36.304 ********* 2026-03-31 04:40:55.025416 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025427 | orchestrator | 2026-03-31 04:40:55.025438 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-31 04:40:55.025449 | orchestrator | Tuesday 31 March 2026 04:40:04 +0000 (0:00:00.488) 0:05:36.793 ********* 2026-03-31 04:40:55.025460 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:40:55.025472 | orchestrator | 2026-03-31 04:40:55.025483 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-31 04:40:55.025494 | orchestrator | Tuesday 31 March 2026 04:40:04 +0000 (0:00:00.551) 0:05:37.345 ********* 2026-03-31 04:40:55.025505 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:40:55.025516 | orchestrator | 2026-03-31 04:40:55.025527 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-31 04:40:55.025539 | orchestrator | Tuesday 31 March 2026 04:40:05 +0000 (0:00:00.525) 0:05:37.871 ********* 2026-03-31 04:40:55.025550 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:40:55.025584 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-31 04:40:55.025596 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 04:40:55.025607 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-31 04:40:55.025618 | orchestrator | 2026-03-31 04:40:55.025630 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-31 04:40:55.025641 | orchestrator | Tuesday 
31 March 2026 04:40:08 +0000 (0:00:03.002) 0:05:40.873 ********* 2026-03-31 04:40:55.025652 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:55.025664 | orchestrator | 2026-03-31 04:40:55.025678 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-31 04:40:55.025691 | orchestrator | Tuesday 31 March 2026 04:40:09 +0000 (0:00:01.203) 0:05:42.076 ********* 2026-03-31 04:40:55.025703 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025716 | orchestrator | 2026-03-31 04:40:55.025728 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-31 04:40:55.025741 | orchestrator | Tuesday 31 March 2026 04:40:09 +0000 (0:00:00.154) 0:05:42.231 ********* 2026-03-31 04:40:55.025754 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025767 | orchestrator | 2026-03-31 04:40:55.025780 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-31 04:40:55.025793 | orchestrator | Tuesday 31 March 2026 04:40:09 +0000 (0:00:00.131) 0:05:42.362 ********* 2026-03-31 04:40:55.025805 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025818 | orchestrator | 2026-03-31 04:40:55.025831 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-31 04:40:55.025843 | orchestrator | Tuesday 31 March 2026 04:40:10 +0000 (0:00:00.700) 0:05:43.063 ********* 2026-03-31 04:40:55.025856 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.025869 | orchestrator | 2026-03-31 04:40:55.025882 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-31 04:40:55.025895 | orchestrator | Tuesday 31 March 2026 04:40:10 +0000 (0:00:00.475) 0:05:43.539 ********* 2026-03-31 04:40:55.025908 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:55.025921 | orchestrator | 2026-03-31 04:40:55.025935 | orchestrator | TASK [ceph-mon 
: Include start_monitor.yml] ************************************ 2026-03-31 04:40:55.025947 | orchestrator | Tuesday 31 March 2026 04:40:10 +0000 (0:00:00.138) 0:05:43.677 ********* 2026-03-31 04:40:55.025960 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-31 04:40:55.025974 | orchestrator | 2026-03-31 04:40:55.025987 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-31 04:40:55.026000 | orchestrator | Tuesday 31 March 2026 04:40:11 +0000 (0:00:00.222) 0:05:43.899 ********* 2026-03-31 04:40:55.026013 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:55.026090 | orchestrator | 2026-03-31 04:40:55.026102 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-31 04:40:55.026138 | orchestrator | Tuesday 31 March 2026 04:40:11 +0000 (0:00:00.140) 0:05:44.040 ********* 2026-03-31 04:40:55.026151 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:55.026163 | orchestrator | 2026-03-31 04:40:55.026174 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-31 04:40:55.026185 | orchestrator | Tuesday 31 March 2026 04:40:11 +0000 (0:00:00.143) 0:05:44.184 ********* 2026-03-31 04:40:55.026196 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-31 04:40:55.026207 | orchestrator | 2026-03-31 04:40:55.026218 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-31 04:40:55.026229 | orchestrator | Tuesday 31 March 2026 04:40:11 +0000 (0:00:00.210) 0:05:44.395 ********* 2026-03-31 04:40:55.026240 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:55.026251 | orchestrator | 2026-03-31 04:40:55.026262 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-31 04:40:55.026273 | orchestrator | Tuesday 31 
March 2026 04:40:13 +0000 (0:00:01.669) 0:05:46.065 ********* 2026-03-31 04:40:55.026293 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.026304 | orchestrator | 2026-03-31 04:40:55.026330 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-31 04:40:55.026342 | orchestrator | Tuesday 31 March 2026 04:40:14 +0000 (0:00:01.238) 0:05:47.303 ********* 2026-03-31 04:40:55.026372 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.026384 | orchestrator | 2026-03-31 04:40:55.026395 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-31 04:40:55.026406 | orchestrator | Tuesday 31 March 2026 04:40:15 +0000 (0:00:01.365) 0:05:48.669 ********* 2026-03-31 04:40:55.026417 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:55.026428 | orchestrator | 2026-03-31 04:40:55.026439 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-31 04:40:55.026450 | orchestrator | Tuesday 31 March 2026 04:40:18 +0000 (0:00:02.205) 0:05:50.875 ********* 2026-03-31 04:40:55.026461 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-31 04:40:55.026472 | orchestrator | 2026-03-31 04:40:55.026483 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-31 04:40:55.026494 | orchestrator | Tuesday 31 March 2026 04:40:18 +0000 (0:00:00.232) 0:05:51.107 ********* 2026-03-31 04:40:55.026505 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-31 04:40:55.026516 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.026527 | orchestrator | 2026-03-31 04:40:55.026538 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-31 04:40:55.026549 | orchestrator | Tuesday 31 March 2026 04:40:40 +0000 (0:00:21.908) 0:06:13.016 ********* 2026-03-31 04:40:55.026560 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:40:55.026570 | orchestrator | 2026-03-31 04:40:55.026581 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-31 04:40:55.026592 | orchestrator | Tuesday 31 March 2026 04:40:42 +0000 (0:00:02.091) 0:06:15.108 ********* 2026-03-31 04:40:55.026603 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:40:55.026614 | orchestrator | 2026-03-31 04:40:55.026625 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-31 04:40:55.026636 | orchestrator | Tuesday 31 March 2026 04:40:42 +0000 (0:00:00.134) 0:06:15.242 ********* 2026-03-31 04:40:55.026649 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-31 04:40:55.026664 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-31 04:40:55.026675 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-31 04:40:55.026687 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-31 04:40:55.026699 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-31 04:40:55.026719 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}])  2026-03-31 04:40:55.026732 | orchestrator | 2026-03-31 04:40:55.026744 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-31 04:40:55.026755 | orchestrator | Tuesday 31 March 2026 04:40:51 +0000 (0:00:08.852) 0:06:24.095 ********* 2026-03-31 04:40:55.026766 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:40:55.026777 | orchestrator | 
2026-03-31 04:40:55.026788 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:40:55.026804 | orchestrator | Tuesday 31 March 2026 04:40:53 +0000 (0:00:02.371) 0:06:26.466 ********* 2026-03-31 04:40:55.026822 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:41:06.952827 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-31 04:41:06.952984 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-31 04:41:06.953002 | orchestrator | 2026-03-31 04:41:06.953016 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:41:06.953029 | orchestrator | Tuesday 31 March 2026 04:40:55 +0000 (0:00:01.228) 0:06:27.695 ********* 2026-03-31 04:41:06.953041 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 04:41:06.953053 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 04:41:06.953064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-31 04:41:06.953077 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953095 | orchestrator | 2026-03-31 04:41:06.953113 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-31 04:41:06.953160 | orchestrator | Tuesday 31 March 2026 04:40:56 +0000 (0:00:01.267) 0:06:28.962 ********* 2026-03-31 04:41:06.953178 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953196 | orchestrator | 2026-03-31 04:41:06.953214 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-31 04:41:06.953231 | orchestrator | Tuesday 31 March 2026 04:40:56 +0000 (0:00:00.116) 0:06:29.078 ********* 2026-03-31 04:41:06.953250 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:41:06.953270 | orchestrator | 2026-03-31 04:41:06.953289 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:41:06.953309 | orchestrator | Tuesday 31 March 2026 04:40:57 +0000 (0:00:01.388) 0:06:30.467 ********* 2026-03-31 04:41:06.953327 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953345 | orchestrator | 2026-03-31 04:41:06.953361 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:41:06.953372 | orchestrator | Tuesday 31 March 2026 04:40:57 +0000 (0:00:00.139) 0:06:30.606 ********* 2026-03-31 04:41:06.953387 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953407 | orchestrator | 2026-03-31 04:41:06.953426 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:41:06.953445 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.162) 0:06:30.769 ********* 2026-03-31 04:41:06.953464 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953483 | orchestrator | 2026-03-31 04:41:06.953502 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 04:41:06.953523 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.126) 0:06:30.895 ********* 2026-03-31 04:41:06.953568 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953581 | orchestrator | 2026-03-31 04:41:06.953594 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 04:41:06.953607 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.123) 0:06:31.019 ********* 2026-03-31 04:41:06.953621 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953639 | 
orchestrator | 2026-03-31 04:41:06.953657 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:41:06.953676 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.132) 0:06:31.151 ********* 2026-03-31 04:41:06.953695 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953714 | orchestrator | 2026-03-31 04:41:06.953731 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:41:06.953742 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.134) 0:06:31.286 ********* 2026-03-31 04:41:06.953753 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:41:06.953764 | orchestrator | 2026-03-31 04:41:06.953775 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-31 04:41:06.953793 | orchestrator | 2026-03-31 04:41:06.953810 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-31 04:41:06.953829 | orchestrator | Tuesday 31 March 2026 04:40:58 +0000 (0:00:00.216) 0:06:31.502 ********* 2026-03-31 04:41:06.953848 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.953868 | orchestrator | 2026-03-31 04:41:06.953880 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-31 04:41:06.953892 | orchestrator | Tuesday 31 March 2026 04:40:59 +0000 (0:00:00.490) 0:06:31.993 ********* 2026-03-31 04:41:06.953903 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.953913 | orchestrator | 2026-03-31 04:41:06.953924 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-31 04:41:06.953936 | orchestrator | Tuesday 31 March 2026 04:40:59 +0000 (0:00:00.169) 0:06:32.162 ********* 2026-03-31 04:41:06.953946 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:06.953957 | orchestrator | 2026-03-31 04:41:06.953968 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-31 04:41:06.953979 | orchestrator | Tuesday 31 March 2026 04:40:59 +0000 (0:00:00.408) 0:06:32.571 ********* 2026-03-31 04:41:06.953990 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954001 | orchestrator | 2026-03-31 04:41:06.954012 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:41:06.954107 | orchestrator | Tuesday 31 March 2026 04:41:00 +0000 (0:00:00.162) 0:06:32.734 ********* 2026-03-31 04:41:06.954118 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-31 04:41:06.954177 | orchestrator | 2026-03-31 04:41:06.954190 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:41:06.954204 | orchestrator | Tuesday 31 March 2026 04:41:00 +0000 (0:00:00.255) 0:06:32.989 ********* 2026-03-31 04:41:06.954222 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954240 | orchestrator | 2026-03-31 04:41:06.954259 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:41:06.954278 | orchestrator | Tuesday 31 March 2026 04:41:00 +0000 (0:00:00.479) 0:06:33.469 ********* 2026-03-31 04:41:06.954297 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954315 | orchestrator | 2026-03-31 04:41:06.954334 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:41:06.954365 | orchestrator | Tuesday 31 March 2026 04:41:00 +0000 (0:00:00.138) 0:06:33.608 ********* 2026-03-31 04:41:06.954384 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954402 | orchestrator | 2026-03-31 04:41:06.954447 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:41:06.954469 | orchestrator | Tuesday 31 March 2026 04:41:01 +0000 (0:00:00.442) 0:06:34.050 
********* 2026-03-31 04:41:06.954488 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954527 | orchestrator | 2026-03-31 04:41:06.954539 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:41:06.954550 | orchestrator | Tuesday 31 March 2026 04:41:01 +0000 (0:00:00.148) 0:06:34.199 ********* 2026-03-31 04:41:06.954561 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954572 | orchestrator | 2026-03-31 04:41:06.954583 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:41:06.954594 | orchestrator | Tuesday 31 March 2026 04:41:01 +0000 (0:00:00.143) 0:06:34.342 ********* 2026-03-31 04:41:06.954605 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954615 | orchestrator | 2026-03-31 04:41:06.954627 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:41:06.954638 | orchestrator | Tuesday 31 March 2026 04:41:01 +0000 (0:00:00.155) 0:06:34.498 ********* 2026-03-31 04:41:06.954649 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:06.954659 | orchestrator | 2026-03-31 04:41:06.954670 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:41:06.954681 | orchestrator | Tuesday 31 March 2026 04:41:01 +0000 (0:00:00.144) 0:06:34.642 ********* 2026-03-31 04:41:06.954698 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954717 | orchestrator | 2026-03-31 04:41:06.954736 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:41:06.954755 | orchestrator | Tuesday 31 March 2026 04:41:02 +0000 (0:00:00.124) 0:06:34.766 ********* 2026-03-31 04:41:06.954774 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:41:06.954794 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-31 04:41:06.954813 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:06.954829 | orchestrator | 2026-03-31 04:41:06.954840 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:41:06.954851 | orchestrator | Tuesday 31 March 2026 04:41:03 +0000 (0:00:00.954) 0:06:35.720 ********* 2026-03-31 04:41:06.954862 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:06.954873 | orchestrator | 2026-03-31 04:41:06.954884 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:41:06.954895 | orchestrator | Tuesday 31 March 2026 04:41:03 +0000 (0:00:00.832) 0:06:36.553 ********* 2026-03-31 04:41:06.954906 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:41:06.954917 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:41:06.954928 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:06.954939 | orchestrator | 2026-03-31 04:41:06.954950 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:41:06.954961 | orchestrator | Tuesday 31 March 2026 04:41:05 +0000 (0:00:01.840) 0:06:38.394 ********* 2026-03-31 04:41:06.954972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:41:06.954983 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:41:06.954994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:41:06.955005 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:06.955016 | orchestrator | 2026-03-31 04:41:06.955027 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:41:06.955038 | orchestrator | Tuesday 31 March 2026 04:41:06 +0000 (0:00:00.448) 
0:06:38.842 ********* 2026-03-31 04:41:06.955051 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:41:06.955066 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:41:06.955086 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:41:06.955097 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:06.955108 | orchestrator | 2026-03-31 04:41:06.955119 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:41:06.955183 | orchestrator | Tuesday 31 March 2026 04:41:06 +0000 (0:00:00.613) 0:06:39.455 ********* 2026-03-31 04:41:06.955197 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:06.955230 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.738209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.738298 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738311 | orchestrator | 2026-03-31 04:41:10.738320 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:41:10.738329 | orchestrator | Tuesday 31 March 2026 04:41:06 +0000 (0:00:00.165) 0:06:39.621 ********* 2026-03-31 04:41:10.738342 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:41:04.382696', 'end': '2026-03-31 04:41:04.434424', 'delta': '0:00:00.051728', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:41:10.738356 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:41:04.913968', 'end': 
'2026-03-31 04:41:04.962155', 'delta': '0:00:00.048187', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:41:10.738365 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'df3f30930c20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:41:05.513869', 'end': '2026-03-31 04:41:05.564694', 'delta': '0:00:00.050825', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df3f30930c20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:41:10.738395 | orchestrator | 2026-03-31 04:41:10.738403 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:41:10.738411 | orchestrator | Tuesday 31 March 2026 04:41:07 +0000 (0:00:00.182) 0:06:39.803 ********* 2026-03-31 04:41:10.738418 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:10.738426 | orchestrator | 2026-03-31 04:41:10.738434 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:41:10.738441 | orchestrator | Tuesday 31 March 2026 04:41:07 +0000 (0:00:00.267) 0:06:40.070 ********* 2026-03-31 04:41:10.738449 | orchestrator | 
skipping: [testbed-node-2] 2026-03-31 04:41:10.738456 | orchestrator | 2026-03-31 04:41:10.738463 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:41:10.738471 | orchestrator | Tuesday 31 March 2026 04:41:07 +0000 (0:00:00.269) 0:06:40.339 ********* 2026-03-31 04:41:10.738478 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:10.738485 | orchestrator | 2026-03-31 04:41:10.738493 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:41:10.738500 | orchestrator | Tuesday 31 March 2026 04:41:07 +0000 (0:00:00.144) 0:06:40.484 ********* 2026-03-31 04:41:10.738508 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-03-31 04:41:10.738515 | orchestrator | 2026-03-31 04:41:10.738534 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:41:10.738542 | orchestrator | Tuesday 31 March 2026 04:41:08 +0000 (0:00:00.953) 0:06:41.437 ********* 2026-03-31 04:41:10.738550 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:10.738557 | orchestrator | 2026-03-31 04:41:10.738564 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:41:10.738572 | orchestrator | Tuesday 31 March 2026 04:41:08 +0000 (0:00:00.153) 0:06:41.591 ********* 2026-03-31 04:41:10.738593 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738601 | orchestrator | 2026-03-31 04:41:10.738609 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:41:10.738616 | orchestrator | Tuesday 31 March 2026 04:41:09 +0000 (0:00:00.119) 0:06:41.711 ********* 2026-03-31 04:41:10.738623 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738631 | orchestrator | 2026-03-31 04:41:10.738638 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 
2026-03-31 04:41:10.738645 | orchestrator | Tuesday 31 March 2026 04:41:09 +0000 (0:00:00.214) 0:06:41.926 ********* 2026-03-31 04:41:10.738653 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738660 | orchestrator | 2026-03-31 04:41:10.738667 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:41:10.738675 | orchestrator | Tuesday 31 March 2026 04:41:09 +0000 (0:00:00.397) 0:06:42.323 ********* 2026-03-31 04:41:10.738682 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738689 | orchestrator | 2026-03-31 04:41:10.738697 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:41:10.738704 | orchestrator | Tuesday 31 March 2026 04:41:09 +0000 (0:00:00.137) 0:06:42.460 ********* 2026-03-31 04:41:10.738711 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738719 | orchestrator | 2026-03-31 04:41:10.738726 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:41:10.738734 | orchestrator | Tuesday 31 March 2026 04:41:09 +0000 (0:00:00.126) 0:06:42.587 ********* 2026-03-31 04:41:10.738743 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738752 | orchestrator | 2026-03-31 04:41:10.738760 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:41:10.738774 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.160) 0:06:42.747 ********* 2026-03-31 04:41:10.738783 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738791 | orchestrator | 2026-03-31 04:41:10.738800 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:41:10.738809 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.119) 0:06:42.867 ********* 2026-03-31 04:41:10.738818 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738826 
| orchestrator | 2026-03-31 04:41:10.738834 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:41:10.738843 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.144) 0:06:43.011 ********* 2026-03-31 04:41:10.738852 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.738860 | orchestrator | 2026-03-31 04:41:10.738868 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:41:10.738877 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.134) 0:06:43.146 ********* 2026-03-31 04:41:10.738886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.738895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.738904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-31 04:41:10.738914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:41:10.738923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.738937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.969722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 
04:41:10.969874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:41:10.969925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.969936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:41:10.969944 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:10.969951 | orchestrator | 2026-03-31 04:41:10.969958 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:41:10.969965 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.263) 0:06:43.410 ********* 2026-03-31 04:41:10.969988 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970002 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970008 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970070 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970078 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970088 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970094 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:10.970112 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:23.753774 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:23.753884 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:41:23.753897 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
04:41:23.753908 | orchestrator | 2026-03-31 04:41:23.753916 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:41:23.753960 | orchestrator | Tuesday 31 March 2026 04:41:10 +0000 (0:00:00.228) 0:06:43.639 ********* 2026-03-31 04:41:23.753969 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:23.753977 | orchestrator | 2026-03-31 04:41:23.753985 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:41:23.753992 | orchestrator | Tuesday 31 March 2026 04:41:11 +0000 (0:00:00.527) 0:06:44.166 ********* 2026-03-31 04:41:23.753999 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:23.754006 | orchestrator | 2026-03-31 04:41:23.754066 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:41:23.754074 | orchestrator | Tuesday 31 March 2026 04:41:11 +0000 (0:00:00.130) 0:06:44.297 ********* 2026-03-31 04:41:23.754081 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:41:23.754088 | orchestrator | 2026-03-31 04:41:23.754096 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:41:23.754110 | orchestrator | Tuesday 31 March 2026 04:41:12 +0000 (0:00:00.487) 0:06:44.784 ********* 2026-03-31 04:41:23.754118 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754125 | orchestrator | 2026-03-31 04:41:23.754132 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:41:23.754166 | orchestrator | Tuesday 31 March 2026 04:41:12 +0000 (0:00:00.121) 0:06:44.905 ********* 2026-03-31 04:41:23.754174 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754182 | orchestrator | 2026-03-31 04:41:23.754189 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:41:23.754196 | orchestrator | Tuesday 31 March 2026 04:41:13 
+0000 (0:00:00.862) 0:06:45.768 ********* 2026-03-31 04:41:23.754204 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754211 | orchestrator | 2026-03-31 04:41:23.754218 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:41:23.754225 | orchestrator | Tuesday 31 March 2026 04:41:13 +0000 (0:00:00.154) 0:06:45.922 ********* 2026-03-31 04:41:23.754233 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-31 04:41:23.754240 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-31 04:41:23.754248 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:23.754255 | orchestrator | 2026-03-31 04:41:23.754262 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:41:23.754269 | orchestrator | Tuesday 31 March 2026 04:41:13 +0000 (0:00:00.657) 0:06:46.580 ********* 2026-03-31 04:41:23.754276 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:41:23.754284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:41:23.754291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:41:23.754298 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754305 | orchestrator | 2026-03-31 04:41:23.754313 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:41:23.754320 | orchestrator | Tuesday 31 March 2026 04:41:14 +0000 (0:00:00.155) 0:06:46.735 ********* 2026-03-31 04:41:23.754329 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754337 | orchestrator | 2026-03-31 04:41:23.754345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:41:23.754354 | orchestrator | Tuesday 31 March 2026 04:41:14 +0000 (0:00:00.148) 0:06:46.884 ********* 2026-03-31 04:41:23.754362 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:41:23.754371 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:41:23.754379 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:23.754387 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:41:23.754395 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:41:23.754403 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:41:23.754433 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:41:23.754443 | orchestrator | 2026-03-31 04:41:23.754451 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:41:23.754459 | orchestrator | Tuesday 31 March 2026 04:41:14 +0000 (0:00:00.797) 0:06:47.681 ********* 2026-03-31 04:41:23.754467 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:41:23.754475 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:41:23.754483 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:23.754491 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:41:23.754499 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:41:23.754508 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:41:23.754516 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:41:23.754524 | orchestrator | 2026-03-31 04:41:23.754532 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-31 04:41:23.754540 | orchestrator | Tuesday 31 March 2026 04:41:16 +0000 (0:00:01.623) 0:06:49.305 ********* 2026-03-31 04:41:23.754548 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754556 | orchestrator | 2026-03-31 04:41:23.754564 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-31 04:41:23.754577 | orchestrator | Tuesday 31 March 2026 04:41:16 +0000 (0:00:00.244) 0:06:49.549 ********* 2026-03-31 04:41:23.754586 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754594 | orchestrator | 2026-03-31 04:41:23.754602 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-31 04:41:23.754610 | orchestrator | Tuesday 31 March 2026 04:41:17 +0000 (0:00:00.236) 0:06:49.785 ********* 2026-03-31 04:41:23.754619 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754627 | orchestrator | 2026-03-31 04:41:23.754634 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-31 04:41:23.754643 | orchestrator | Tuesday 31 March 2026 04:41:17 +0000 (0:00:00.131) 0:06:49.917 ********* 2026-03-31 04:41:23.754651 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754659 | orchestrator | 2026-03-31 04:41:23.754667 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-31 04:41:23.754676 | orchestrator | Tuesday 31 March 2026 04:41:17 +0000 (0:00:00.220) 0:06:50.138 ********* 2026-03-31 04:41:23.754684 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754691 | orchestrator | 2026-03-31 04:41:23.754698 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-31 04:41:23.754705 | orchestrator | Tuesday 31 March 2026 04:41:17 +0000 (0:00:00.143) 0:06:50.282 ********* 
2026-03-31 04:41:23.754713 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:41:23.754720 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:41:23.754727 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:41:23.754734 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754741 | orchestrator | 2026-03-31 04:41:23.754748 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-31 04:41:23.754756 | orchestrator | Tuesday 31 March 2026 04:41:18 +0000 (0:00:01.053) 0:06:51.335 ********* 2026-03-31 04:41:23.754763 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-31 04:41:23.754770 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-31 04:41:23.754777 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-31 04:41:23.754784 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-31 04:41:23.754791 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-31 04:41:23.754806 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-31 04:41:23.754814 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:41:23.754821 | orchestrator | 2026-03-31 04:41:23.754828 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-31 04:41:23.754835 | orchestrator | Tuesday 31 March 2026 04:41:19 +0000 (0:00:00.689) 0:06:52.025 ********* 2026-03-31 04:41:23.754842 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:23.754850 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:41:23.754857 | orchestrator | 2026-03-31 04:41:23.754864 | 
orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-31 04:41:23.754871 | orchestrator | Tuesday 31 March 2026 04:41:21 +0000 (0:00:02.464) 0:06:54.489 *********
2026-03-31 04:41:23.754878 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:41:23.754885 | orchestrator |
2026-03-31 04:41:23.754893 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:41:23.754900 | orchestrator | Tuesday 31 March 2026 04:41:23 +0000 (0:00:01.461) 0:06:55.951 *********
2026-03-31 04:41:23.754907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-31 04:41:23.754915 | orchestrator |
2026-03-31 04:41:23.754922 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:41:23.754929 | orchestrator | Tuesday 31 March 2026 04:41:23 +0000 (0:00:00.227) 0:06:56.179 *********
2026-03-31 04:41:23.754936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-31 04:41:23.754943 | orchestrator |
2026-03-31 04:41:23.754951 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:41:23.754962 | orchestrator | Tuesday 31 March 2026 04:41:23 +0000 (0:00:00.244) 0:06:56.423 *********
2026-03-31 04:41:35.270756 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.270874 | orchestrator |
2026-03-31 04:41:35.270893 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:41:35.270907 | orchestrator | Tuesday 31 March 2026 04:41:24 +0000 (0:00:00.542) 0:06:56.965 *********
2026-03-31 04:41:35.270919 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.270931 | orchestrator |
2026-03-31 04:41:35.270943 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:41:35.270954 | orchestrator | Tuesday 31 March 2026 04:41:24 +0000 (0:00:00.129) 0:06:57.095 *********
2026-03-31 04:41:35.270966 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.270977 | orchestrator |
2026-03-31 04:41:35.270988 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:41:35.271000 | orchestrator | Tuesday 31 March 2026 04:41:24 +0000 (0:00:00.141) 0:06:57.236 *********
2026-03-31 04:41:35.271011 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271022 | orchestrator |
2026-03-31 04:41:35.271033 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:41:35.271045 | orchestrator | Tuesday 31 March 2026 04:41:24 +0000 (0:00:00.129) 0:06:57.366 *********
2026-03-31 04:41:35.271057 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271069 | orchestrator |
2026-03-31 04:41:35.271080 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:41:35.271091 | orchestrator | Tuesday 31 March 2026 04:41:25 +0000 (0:00:00.584) 0:06:57.951 *********
2026-03-31 04:41:35.271103 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271114 | orchestrator |
2026-03-31 04:41:35.271125 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:41:35.271183 | orchestrator | Tuesday 31 March 2026 04:41:25 +0000 (0:00:00.383) 0:06:58.334 *********
2026-03-31 04:41:35.271206 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271228 | orchestrator |
2026-03-31 04:41:35.271246 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:41:35.271285 | orchestrator | Tuesday 31 March 2026 04:41:25 +0000 (0:00:00.129) 0:06:58.463 *********
2026-03-31 04:41:35.271299 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271313 | orchestrator |
2026-03-31 04:41:35.271326 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:41:35.271339 | orchestrator | Tuesday 31 March 2026 04:41:26 +0000 (0:00:00.587) 0:06:59.051 *********
2026-03-31 04:41:35.271355 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271374 | orchestrator |
2026-03-31 04:41:35.271393 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:41:35.271413 | orchestrator | Tuesday 31 March 2026 04:41:26 +0000 (0:00:00.622) 0:06:59.673 *********
2026-03-31 04:41:35.271434 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271453 | orchestrator |
2026-03-31 04:41:35.271473 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:41:35.271486 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.154) 0:06:59.828 *********
2026-03-31 04:41:35.271499 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271512 | orchestrator |
2026-03-31 04:41:35.271524 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:41:35.271537 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.143) 0:06:59.972 *********
2026-03-31 04:41:35.271549 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271562 | orchestrator |
2026-03-31 04:41:35.271575 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:41:35.271594 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.131) 0:07:00.103 *********
2026-03-31 04:41:35.271613 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271631 | orchestrator |
2026-03-31 04:41:35.271649 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:41:35.271669 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.144) 0:07:00.247 *********
2026-03-31 04:41:35.271688 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271707 | orchestrator |
2026-03-31 04:41:35.271727 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:41:35.271746 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.138) 0:07:00.386 *********
2026-03-31 04:41:35.271764 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271784 | orchestrator |
2026-03-31 04:41:35.271803 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:41:35.271816 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.150) 0:07:00.536 *********
2026-03-31 04:41:35.271827 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.271838 | orchestrator |
2026-03-31 04:41:35.271849 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:41:35.271860 | orchestrator | Tuesday 31 March 2026 04:41:27 +0000 (0:00:00.119) 0:07:00.655 *********
2026-03-31 04:41:35.271871 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271882 | orchestrator |
2026-03-31 04:41:35.271893 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:41:35.271904 | orchestrator | Tuesday 31 March 2026 04:41:28 +0000 (0:00:00.149) 0:07:00.805 *********
2026-03-31 04:41:35.271916 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271927 | orchestrator |
2026-03-31 04:41:35.271938 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:41:35.271949 | orchestrator | Tuesday 31 March 2026 04:41:28 +0000 (0:00:00.156) 0:07:00.962 *********
2026-03-31 04:41:35.271960 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.271971 | orchestrator |
2026-03-31 04:41:35.271982 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:41:35.271993 | orchestrator | Tuesday 31 March 2026 04:41:28 +0000 (0:00:00.527) 0:07:01.489 *********
2026-03-31 04:41:35.272004 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272015 | orchestrator |
2026-03-31 04:41:35.272026 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:41:35.272047 | orchestrator | Tuesday 31 March 2026 04:41:28 +0000 (0:00:00.138) 0:07:01.628 *********
2026-03-31 04:41:35.272058 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272069 | orchestrator |
2026-03-31 04:41:35.272080 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:41:35.272111 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.133) 0:07:01.762 *********
2026-03-31 04:41:35.272123 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272134 | orchestrator |
2026-03-31 04:41:35.272145 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:41:35.272184 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.139) 0:07:01.901 *********
2026-03-31 04:41:35.272195 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272206 | orchestrator |
2026-03-31 04:41:35.272217 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:41:35.272228 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.143) 0:07:02.045 *********
2026-03-31 04:41:35.272239 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272250 | orchestrator |
2026-03-31 04:41:35.272261 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:41:35.272271 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.110) 0:07:02.156 *********
2026-03-31 04:41:35.272282 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272293 | orchestrator |
2026-03-31 04:41:35.272304 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:41:35.272315 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.122) 0:07:02.278 *********
2026-03-31 04:41:35.272326 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272337 | orchestrator |
2026-03-31 04:41:35.272348 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:41:35.272360 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.119) 0:07:02.397 *********
2026-03-31 04:41:35.272378 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272389 | orchestrator |
2026-03-31 04:41:35.272400 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:41:35.272410 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.120) 0:07:02.517 *********
2026-03-31 04:41:35.272421 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272432 | orchestrator |
2026-03-31 04:41:35.272443 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:41:35.272454 | orchestrator | Tuesday 31 March 2026 04:41:29 +0000 (0:00:00.126) 0:07:02.644 *********
2026-03-31 04:41:35.272465 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272476 | orchestrator |
2026-03-31 04:41:35.272487 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:41:35.272498 | orchestrator | Tuesday 31 March 2026 04:41:30 +0000 (0:00:00.130) 0:07:02.775 *********
2026-03-31 04:41:35.272509 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272519 | orchestrator |
2026-03-31 04:41:35.272530 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:41:35.272541 | orchestrator | Tuesday 31 March 2026 04:41:30 +0000 (0:00:00.122) 0:07:02.897 *********
2026-03-31 04:41:35.272552 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272563 | orchestrator |
2026-03-31 04:41:35.272574 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:41:35.272585 | orchestrator | Tuesday 31 March 2026 04:41:30 +0000 (0:00:00.496) 0:07:03.393 *********
2026-03-31 04:41:35.272596 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.272607 | orchestrator |
2026-03-31 04:41:35.272617 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:41:35.272628 | orchestrator | Tuesday 31 March 2026 04:41:31 +0000 (0:00:00.933) 0:07:04.327 *********
2026-03-31 04:41:35.272639 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.272650 | orchestrator |
2026-03-31 04:41:35.272661 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:41:35.272680 | orchestrator | Tuesday 31 March 2026 04:41:33 +0000 (0:00:01.405) 0:07:05.733 *********
2026-03-31 04:41:35.272691 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-31 04:41:35.272703 | orchestrator |
2026-03-31 04:41:35.272714 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 04:41:35.272725 | orchestrator | Tuesday 31 March 2026 04:41:33 +0000 (0:00:00.203) 0:07:05.936 *********
2026-03-31 04:41:35.272736 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272747 | orchestrator |
2026-03-31 04:41:35.272758 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 04:41:35.272768 | orchestrator | Tuesday 31 March 2026 04:41:33 +0000 (0:00:00.146) 0:07:06.083 *********
2026-03-31 04:41:35.272779 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272790 | orchestrator |
2026-03-31 04:41:35.272801 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 04:41:35.272812 | orchestrator | Tuesday 31 March 2026 04:41:33 +0000 (0:00:00.144) 0:07:06.228 *********
2026-03-31 04:41:35.272823 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 04:41:35.272833 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 04:41:35.272844 | orchestrator |
2026-03-31 04:41:35.272855 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 04:41:35.272866 | orchestrator | Tuesday 31 March 2026 04:41:34 +0000 (0:00:00.849) 0:07:07.078 *********
2026-03-31 04:41:35.272877 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:35.272888 | orchestrator |
2026-03-31 04:41:35.272899 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 04:41:35.272910 | orchestrator | Tuesday 31 March 2026 04:41:34 +0000 (0:00:00.468) 0:07:07.546 *********
2026-03-31 04:41:35.272921 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272932 | orchestrator |
2026-03-31 04:41:35.272943 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 04:41:35.272954 | orchestrator | Tuesday 31 March 2026 04:41:34 +0000 (0:00:00.129) 0:07:07.676 *********
2026-03-31 04:41:35.272964 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:35.272975 | orchestrator |
2026-03-31 04:41:35.272986 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:41:35.272997 | orchestrator | Tuesday 31 March 2026 04:41:35 +0000 (0:00:00.131) 0:07:07.808 *********
2026-03-31 04:41:35.273015 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.098998 | orchestrator |
2026-03-31 04:41:50.099113 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:41:50.099131 | orchestrator | Tuesday 31 March 2026 04:41:35 +0000 (0:00:00.133) 0:07:07.941 *********
2026-03-31 04:41:50.099158 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-31 04:41:50.099230 | orchestrator |
2026-03-31 04:41:50.099243 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 04:41:50.099255 | orchestrator | Tuesday 31 March 2026 04:41:35 +0000 (0:00:00.501) 0:07:08.442 *********
2026-03-31 04:41:50.099266 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:50.099279 | orchestrator |
2026-03-31 04:41:50.099291 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 04:41:50.099303 | orchestrator | Tuesday 31 March 2026 04:41:37 +0000 (0:00:01.845) 0:07:10.288 *********
2026-03-31 04:41:50.099314 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 04:41:50.099325 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 04:41:50.099336 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 04:41:50.099347 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099359 | orchestrator |
2026-03-31 04:41:50.099370 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 04:41:50.099405 | orchestrator | Tuesday 31 March 2026 04:41:37 +0000 (0:00:00.158) 0:07:10.447 *********
2026-03-31 04:41:50.099432 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099444 | orchestrator |
2026-03-31 04:41:50.099455 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 04:41:50.099466 | orchestrator | Tuesday 31 March 2026 04:41:37 +0000 (0:00:00.148) 0:07:10.596 *********
2026-03-31 04:41:50.099478 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099489 | orchestrator |
2026-03-31 04:41:50.099501 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 04:41:50.099512 | orchestrator | Tuesday 31 March 2026 04:41:38 +0000 (0:00:00.180) 0:07:10.777 *********
2026-03-31 04:41:50.099524 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099537 | orchestrator |
2026-03-31 04:41:50.099550 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 04:41:50.099564 | orchestrator | Tuesday 31 March 2026 04:41:38 +0000 (0:00:00.149) 0:07:10.926 *********
2026-03-31 04:41:50.099577 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099589 | orchestrator |
2026-03-31 04:41:50.099602 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 04:41:50.099614 | orchestrator | Tuesday 31 March 2026 04:41:38 +0000 (0:00:00.169) 0:07:11.096 *********
2026-03-31 04:41:50.099628 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099640 | orchestrator |
2026-03-31 04:41:50.099654 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:41:50.099667 | orchestrator | Tuesday 31 March 2026 04:41:38 +0000 (0:00:00.160) 0:07:11.257 *********
2026-03-31 04:41:50.099679 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:50.099692 | orchestrator |
2026-03-31 04:41:50.099708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:41:50.099727 | orchestrator | Tuesday 31 March 2026 04:41:40 +0000 (0:00:01.580) 0:07:12.838 *********
2026-03-31 04:41:50.099747 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:50.099827 | orchestrator |
2026-03-31 04:41:50.099843 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:41:50.099856 | orchestrator | Tuesday 31 March 2026 04:41:40 +0000 (0:00:00.130) 0:07:12.968 *********
2026-03-31 04:41:50.099869 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-03-31 04:41:50.099894 | orchestrator |
2026-03-31 04:41:50.099907 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 04:41:50.099918 | orchestrator | Tuesday 31 March 2026 04:41:40 +0000 (0:00:00.232) 0:07:13.201 *********
2026-03-31 04:41:50.099929 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099940 | orchestrator |
2026-03-31 04:41:50.099951 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 04:41:50.099962 | orchestrator | Tuesday 31 March 2026 04:41:40 +0000 (0:00:00.134) 0:07:13.335 *********
2026-03-31 04:41:50.099973 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.099984 | orchestrator |
2026-03-31 04:41:50.099995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 04:41:50.100006 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.434) 0:07:13.770 *********
2026-03-31 04:41:50.100018 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100029 | orchestrator |
2026-03-31 04:41:50.100040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 04:41:50.100051 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.144) 0:07:13.914 *********
2026-03-31 04:41:50.100062 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100073 | orchestrator |
2026-03-31 04:41:50.100084 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 04:41:50.100095 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.160) 0:07:14.075 *********
2026-03-31 04:41:50.100107 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100138 | orchestrator |
2026-03-31 04:41:50.100157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 04:41:50.100203 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.157) 0:07:14.232 *********
2026-03-31 04:41:50.100223 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100239 | orchestrator |
2026-03-31 04:41:50.100266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 04:41:50.100288 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.150) 0:07:14.383 *********
2026-03-31 04:41:50.100305 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100322 | orchestrator |
2026-03-31 04:41:50.100339 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 04:41:50.100382 | orchestrator | Tuesday 31 March 2026 04:41:41 +0000 (0:00:00.157) 0:07:14.541 *********
2026-03-31 04:41:50.100402 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100419 | orchestrator |
2026-03-31 04:41:50.100436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 04:41:50.100455 | orchestrator | Tuesday 31 March 2026 04:41:42 +0000 (0:00:00.151) 0:07:14.693 *********
2026-03-31 04:41:50.100474 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:41:50.100492 | orchestrator |
2026-03-31 04:41:50.100509 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:41:50.100520 | orchestrator | Tuesday 31 March 2026 04:41:42 +0000 (0:00:00.229) 0:07:14.922 *********
2026-03-31 04:41:50.100531 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-03-31 04:41:50.100543 | orchestrator |
2026-03-31 04:41:50.100554 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 04:41:50.100565 | orchestrator | Tuesday 31 March 2026 04:41:42 +0000 (0:00:00.199) 0:07:15.121 *********
2026-03-31 04:41:50.100576 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-03-31 04:41:50.100588 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-31 04:41:50.100599 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-31 04:41:50.100610 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-31 04:41:50.100621 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-31 04:41:50.100632 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-31 04:41:50.100652 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-31 04:41:50.100664 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-31 04:41:50.100676 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 04:41:50.100687 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 04:41:50.100698 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 04:41:50.100708 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 04:41:50.100720 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 04:41:50.100731 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 04:41:50.100742 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-03-31 04:41:50.100753 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-03-31 04:41:50.100764 | orchestrator |
2026-03-31 04:41:50.100775 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:41:50.100786 | orchestrator | Tuesday 31 March 2026 04:41:48 +0000 (0:00:05.881) 0:07:21.002 *********
2026-03-31 04:41:50.100797 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100809 | orchestrator |
2026-03-31 04:41:50.100820 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:41:50.100831 | orchestrator | Tuesday 31 March 2026 04:41:48 +0000 (0:00:00.133) 0:07:21.136 *********
2026-03-31 04:41:50.100842 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100853 | orchestrator |
2026-03-31 04:41:50.100864 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:41:50.100887 | orchestrator | Tuesday 31 March 2026 04:41:48 +0000 (0:00:00.129) 0:07:21.265 *********
2026-03-31 04:41:50.100898 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100909 | orchestrator |
2026-03-31 04:41:50.100920 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:41:50.100931 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.417) 0:07:21.683 *********
2026-03-31 04:41:50.100942 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100953 | orchestrator |
2026-03-31 04:41:50.100964 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:41:50.100975 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.136) 0:07:21.819 *********
2026-03-31 04:41:50.100986 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.100997 | orchestrator |
2026-03-31 04:41:50.101008 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:41:50.101019 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.123) 0:07:21.942 *********
2026-03-31 04:41:50.101030 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101041 | orchestrator |
2026-03-31 04:41:50.101052 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:41:50.101063 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.142) 0:07:22.084 *********
2026-03-31 04:41:50.101074 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101085 | orchestrator |
2026-03-31 04:41:50.101096 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:41:50.101108 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.128) 0:07:22.213 *********
2026-03-31 04:41:50.101119 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101130 | orchestrator |
2026-03-31 04:41:50.101141 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:41:50.101153 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.124) 0:07:22.337 *********
2026-03-31 04:41:50.101164 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101224 | orchestrator |
2026-03-31 04:41:50.101243 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:41:50.101261 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.134) 0:07:22.478 *********
2026-03-31 04:41:50.101277 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101289 | orchestrator |
2026-03-31 04:41:50.101300 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:41:50.101312 | orchestrator | Tuesday 31 March 2026 04:41:49 +0000 (0:00:00.134) 0:07:22.613 *********
2026-03-31 04:41:50.101329 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:41:50.101348 | orchestrator |
2026-03-31 04:41:50.101394 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:42:08.242964 | orchestrator | Tuesday 31 March 2026 04:41:50 +0000 (0:00:00.147) 0:07:22.761 *********
2026-03-31 04:42:08.243063 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243075 | orchestrator |
2026-03-31 04:42:08.243086 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:42:08.243095 | orchestrator | Tuesday 31 March 2026 04:41:50 +0000 (0:00:00.136) 0:07:22.897 *********
2026-03-31 04:42:08.243103 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243112 | orchestrator |
2026-03-31 04:42:08.243121 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:42:08.243129 | orchestrator | Tuesday 31 March 2026 04:41:50 +0000 (0:00:00.241) 0:07:23.139 *********
2026-03-31 04:42:08.243137 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243145 | orchestrator |
2026-03-31 04:42:08.243153 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:42:08.243161 | orchestrator | Tuesday 31 March 2026 04:41:50 +0000 (0:00:00.142) 0:07:23.281 *********
2026-03-31 04:42:08.243169 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243265 | orchestrator |
2026-03-31 04:42:08.243275 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:42:08.243295 | orchestrator | Tuesday 31 March 2026 04:41:50 +0000 (0:00:00.226) 0:07:23.508 *********
2026-03-31 04:42:08.243303 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243312 | orchestrator |
2026-03-31 04:42:08.243320 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:42:08.243340 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.420) 0:07:23.928 *********
2026-03-31 04:42:08.243349 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243357 | orchestrator |
2026-03-31 04:42:08.243368 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:42:08.243383 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.133) 0:07:24.062 *********
2026-03-31 04:42:08.243397 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243409 | orchestrator |
2026-03-31 04:42:08.243422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:42:08.243435 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.159) 0:07:24.221 *********
2026-03-31 04:42:08.243447 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243459 | orchestrator |
2026-03-31 04:42:08.243472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:42:08.243485 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.123) 0:07:24.345 *********
2026-03-31 04:42:08.243498 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243511 | orchestrator |
2026-03-31 04:42:08.243525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:42:08.243540 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.136) 0:07:24.481 *********
2026-03-31 04:42:08.243553 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243565 | orchestrator |
2026-03-31 04:42:08.243575 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:42:08.243585 | orchestrator | Tuesday 31 March 2026 04:41:51 +0000 (0:00:00.130) 0:07:24.612 *********
2026-03-31 04:42:08.243595 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-31 04:42:08.243605 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-31 04:42:08.243614 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-31 04:42:08.243623 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243631 | orchestrator |
2026-03-31 04:42:08.243639 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:42:08.243647 | orchestrator | Tuesday 31 March 2026 04:41:52 +0000 (0:00:00.425) 0:07:25.037 *********
2026-03-31 04:42:08.243656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-31 04:42:08.243669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-31 04:42:08.243681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-31 04:42:08.243689 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243696 | orchestrator |
2026-03-31 04:42:08.243705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:42:08.243712 | orchestrator | Tuesday 31 March 2026 04:41:52 +0000 (0:00:00.421) 0:07:25.458 *********
2026-03-31 04:42:08.243720 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-31 04:42:08.243728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-31 04:42:08.243736 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-31 04:42:08.243744 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243752 | orchestrator |
2026-03-31 04:42:08.243760 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:42:08.243768 | orchestrator | Tuesday 31 March 2026 04:41:53 +0000 (0:00:00.402) 0:07:25.860 *********
2026-03-31 04:42:08.243776 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243792 | orchestrator |
2026-03-31 04:42:08.243800 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:42:08.243808 | orchestrator | Tuesday 31 March 2026 04:41:53 +0000 (0:00:00.143) 0:07:26.004 *********
2026-03-31 04:42:08.243817 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-31 04:42:08.243825 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.243833 | orchestrator |
2026-03-31 04:42:08.243841 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-31 04:42:08.243849 | orchestrator | Tuesday 31 March 2026 04:41:53 +0000 (0:00:00.334) 0:07:26.339 *********
2026-03-31 04:42:08.243858 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:42:08.243872 | orchestrator |
2026-03-31 04:42:08.243884 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-31 04:42:08.243898 | orchestrator | Tuesday 31 March 2026 04:41:54 +0000 (0:00:01.194) 0:07:27.533 *********
2026-03-31 04:42:08.243910 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.243923 | orchestrator |
2026-03-31 04:42:08.243935 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-31 04:42:08.243968 | orchestrator | Tuesday 31 March 2026 04:41:55 +0000 (0:00:00.186) 0:07:27.720 *********
2026-03-31 04:42:08.243983 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-03-31 04:42:08.243998 | orchestrator |
2026-03-31 04:42:08.244013 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-31 04:42:08.244025 | orchestrator | Tuesday 31 March 2026 04:41:55 +0000 (0:00:00.283) 0:07:28.004 *********
2026-03-31 04:42:08.244038 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244051 | orchestrator |
2026-03-31 04:42:08.244063 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-31 04:42:08.244076 | orchestrator | Tuesday 31 March 2026 04:41:57 +0000 (0:00:02.215) 0:07:30.219 *********
2026-03-31 04:42:08.244089 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:42:08.244102 | orchestrator |
2026-03-31 04:42:08.244115 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-31 04:42:08.244129 | orchestrator | Tuesday 31 March 2026 04:41:57 +0000 (0:00:00.174) 0:07:30.394 *********
2026-03-31 04:42:08.244142 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244155 | orchestrator |
2026-03-31 04:42:08.244168 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-31 04:42:08.244181 | orchestrator | Tuesday 31 March 2026 04:41:57 +0000 (0:00:00.148) 0:07:30.543 *********
2026-03-31 04:42:08.244225 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244238 | orchestrator |
2026-03-31 04:42:08.244251 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-31 04:42:08.244274 | orchestrator | Tuesday 31 March 2026 04:41:58 +0000 (0:00:00.161) 0:07:30.705 *********
2026-03-31 04:42:08.244288 | orchestrator | changed: [testbed-node-2]
2026-03-31 04:42:08.244298 | orchestrator |
2026-03-31 04:42:08.244309 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-31 04:42:08.244320 | orchestrator | Tuesday 31 March 2026 04:41:59 +0000 (0:00:01.045) 0:07:31.750 *********
2026-03-31 04:42:08.244333 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244346 | orchestrator |
2026-03-31 04:42:08.244359 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-31 04:42:08.244372 | orchestrator | Tuesday 31 March 2026 04:41:59 +0000 (0:00:00.470) 0:07:32.384 *********
2026-03-31 04:42:08.244385 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244399 | orchestrator |
2026-03-31 04:42:08.244412 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-31 04:42:08.244425 | orchestrator | Tuesday 31 March 2026 04:42:00 +0000 (0:00:00.470) 0:07:32.855 *********
2026-03-31 04:42:08.244437 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:42:08.244445 | orchestrator |
2026-03-31 04:42:08.244453 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-31 04:42:08.244461 | orchestrator | Tuesday 31 March 2026 04:42:00 +0000 (0:00:00.505) 0:07:33.360 *********
2026-03-31 04:42:08.244478 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:42:08.244486 | orchestrator |
2026-03-31 04:42:08.244494 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-31 04:42:08.244502 | orchestrator | Tuesday 31 March 2026 04:42:01 +0000 (0:00:00.569) 0:07:33.930 *********
2026-03-31 04:42:08.244510 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:42:08.244518 | orchestrator |
2026-03-31 04:42:08.244526 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-31 04:42:08.244534 | orchestrator | Tuesday 31 March 2026 04:42:02 +0000 (0:00:01.178) 0:07:35.108 *********
2026-03-31 04:42:08.244542 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 04:42:08.244550 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-31 04:42:08.244559 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-31 04:42:08.244567 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-31 04:42:08.244575 | orchestrator |
2026-03-31 04:42:08.244583 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-31 04:42:08.244591 | orchestrator | Tuesday 31 March 2026 04:42:05 +0000 (0:00:02.903) 0:07:38.012
********* 2026-03-31 04:42:08.244599 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:42:08.244607 | orchestrator | 2026-03-31 04:42:08.244615 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-31 04:42:08.244623 | orchestrator | Tuesday 31 March 2026 04:42:06 +0000 (0:00:00.999) 0:07:39.011 ********* 2026-03-31 04:42:08.244631 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:08.244639 | orchestrator | 2026-03-31 04:42:08.244647 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-31 04:42:08.244655 | orchestrator | Tuesday 31 March 2026 04:42:06 +0000 (0:00:00.150) 0:07:39.162 ********* 2026-03-31 04:42:08.244662 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:08.244670 | orchestrator | 2026-03-31 04:42:08.244678 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-31 04:42:08.244686 | orchestrator | Tuesday 31 March 2026 04:42:06 +0000 (0:00:00.144) 0:07:39.307 ********* 2026-03-31 04:42:08.244694 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:08.244702 | orchestrator | 2026-03-31 04:42:08.244710 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-31 04:42:08.244718 | orchestrator | Tuesday 31 March 2026 04:42:07 +0000 (0:00:00.744) 0:07:40.052 ********* 2026-03-31 04:42:08.244726 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:08.244734 | orchestrator | 2026-03-31 04:42:08.244742 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-31 04:42:08.244750 | orchestrator | Tuesday 31 March 2026 04:42:07 +0000 (0:00:00.474) 0:07:40.527 ********* 2026-03-31 04:42:08.244758 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:08.244766 | orchestrator | 2026-03-31 04:42:08.244773 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-31 04:42:08.244781 | orchestrator | Tuesday 31 March 2026 04:42:07 +0000 (0:00:00.149) 0:07:40.676 ********* 2026-03-31 04:42:08.244789 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-03-31 04:42:08.244797 | orchestrator | 2026-03-31 04:42:08.244816 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-31 04:42:54.948928 | orchestrator | Tuesday 31 March 2026 04:42:08 +0000 (0:00:00.235) 0:07:40.911 ********* 2026-03-31 04:42:54.949073 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.949099 | orchestrator | 2026-03-31 04:42:54.949118 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-31 04:42:54.949137 | orchestrator | Tuesday 31 March 2026 04:42:08 +0000 (0:00:00.117) 0:07:41.029 ********* 2026-03-31 04:42:54.949155 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.949174 | orchestrator | 2026-03-31 04:42:54.949193 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-31 04:42:54.949274 | orchestrator | Tuesday 31 March 2026 04:42:08 +0000 (0:00:00.138) 0:07:41.168 ********* 2026-03-31 04:42:54.949294 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-03-31 04:42:54.949310 | orchestrator | 2026-03-31 04:42:54.949327 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-31 04:42:54.949345 | orchestrator | Tuesday 31 March 2026 04:42:08 +0000 (0:00:00.469) 0:07:41.637 ********* 2026-03-31 04:42:54.949361 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.949378 | orchestrator | 2026-03-31 04:42:54.949394 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-31 04:42:54.949411 | orchestrator | Tuesday 31 March 2026 04:42:10 +0000 
(0:00:01.263) 0:07:42.900 ********* 2026-03-31 04:42:54.949429 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.949448 | orchestrator | 2026-03-31 04:42:54.949465 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-31 04:42:54.949501 | orchestrator | Tuesday 31 March 2026 04:42:11 +0000 (0:00:00.961) 0:07:43.862 ********* 2026-03-31 04:42:54.949520 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.949538 | orchestrator | 2026-03-31 04:42:54.949555 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-31 04:42:54.949572 | orchestrator | Tuesday 31 March 2026 04:42:12 +0000 (0:00:01.389) 0:07:45.252 ********* 2026-03-31 04:42:54.949588 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:42:54.949604 | orchestrator | 2026-03-31 04:42:54.949620 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-31 04:42:54.949635 | orchestrator | Tuesday 31 March 2026 04:42:14 +0000 (0:00:02.248) 0:07:47.500 ********* 2026-03-31 04:42:54.949650 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-03-31 04:42:54.949667 | orchestrator | 2026-03-31 04:42:54.949685 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-31 04:42:54.949703 | orchestrator | Tuesday 31 March 2026 04:42:15 +0000 (0:00:00.218) 0:07:47.719 ********* 2026-03-31 04:42:54.949721 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-31 04:42:54.949738 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.949756 | orchestrator | 2026-03-31 04:42:54.949774 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-31 04:42:54.949791 | orchestrator | Tuesday 31 March 2026 04:42:37 +0000 (0:00:21.994) 0:08:09.714 ********* 2026-03-31 04:42:54.949808 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.949825 | orchestrator | 2026-03-31 04:42:54.949842 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-31 04:42:54.949857 | orchestrator | Tuesday 31 March 2026 04:42:39 +0000 (0:00:02.063) 0:08:11.777 ********* 2026-03-31 04:42:54.949873 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.949889 | orchestrator | 2026-03-31 04:42:54.949906 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-31 04:42:54.949922 | orchestrator | Tuesday 31 March 2026 04:42:39 +0000 (0:00:00.133) 0:08:11.910 ********* 2026-03-31 04:42:54.949942 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-31 04:42:54.949961 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-31 04:42:54.949978 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-31 04:42:54.950013 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-31 04:42:54.950142 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-31 04:42:54.950163 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__844b88a37a697fc95420139c4fef42975660f41e'}])  2026-03-31 04:42:54.950180 | orchestrator | 2026-03-31 04:42:54.950190 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-31 04:42:54.950200 | orchestrator | Tuesday 31 March 2026 04:42:47 +0000 (0:00:08.662) 0:08:20.573 ********* 2026-03-31 04:42:54.950210 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:42:54.950220 | orchestrator | 
2026-03-31 04:42:54.950281 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:42:54.950309 | orchestrator | Tuesday 31 March 2026 04:42:49 +0000 (0:00:01.531) 0:08:22.104 ********* 2026-03-31 04:42:54.950326 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:42:54.950342 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-31 04:42:54.950358 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-31 04:42:54.950374 | orchestrator | 2026-03-31 04:42:54.950389 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:42:54.950399 | orchestrator | Tuesday 31 March 2026 04:42:50 +0000 (0:00:01.169) 0:08:23.274 ********* 2026-03-31 04:42:54.950409 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:42:54.950419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:42:54.950429 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:42:54.950439 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950448 | orchestrator | 2026-03-31 04:42:54.950458 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-31 04:42:54.950468 | orchestrator | Tuesday 31 March 2026 04:42:51 +0000 (0:00:01.047) 0:08:24.321 ********* 2026-03-31 04:42:54.950478 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950488 | orchestrator | 2026-03-31 04:42:54.950497 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-31 04:42:54.950507 | orchestrator | Tuesday 31 March 2026 04:42:51 +0000 (0:00:00.129) 0:08:24.451 ********* 2026-03-31 04:42:54.950517 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.950527 | orchestrator | 2026-03-31 04:42:54.950536 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:42:54.950546 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:01.383) 0:08:25.835 ********* 2026-03-31 04:42:54.950556 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950566 | orchestrator | 2026-03-31 04:42:54.950585 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:42:54.950595 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.130) 0:08:25.965 ********* 2026-03-31 04:42:54.950605 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950615 | orchestrator | 2026-03-31 04:42:54.950624 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:42:54.950634 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.131) 0:08:26.097 ********* 2026-03-31 04:42:54.950644 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950654 | orchestrator | 2026-03-31 04:42:54.950664 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 04:42:54.950673 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.131) 0:08:26.229 ********* 2026-03-31 04:42:54.950683 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950693 | orchestrator | 2026-03-31 04:42:54.950702 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 04:42:54.950712 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.132) 0:08:26.362 ********* 2026-03-31 04:42:54.950722 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950732 | 
orchestrator | 2026-03-31 04:42:54.950741 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:42:54.950749 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.138) 0:08:26.500 ********* 2026-03-31 04:42:54.950757 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950765 | orchestrator | 2026-03-31 04:42:54.950773 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:42:54.950781 | orchestrator | Tuesday 31 March 2026 04:42:53 +0000 (0:00:00.129) 0:08:26.629 ********* 2026-03-31 04:42:54.950789 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:42:54.950797 | orchestrator | 2026-03-31 04:42:54.950805 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-03-31 04:42:54.950813 | orchestrator | 2026-03-31 04:42:54.950821 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-03-31 04:42:54.950829 | orchestrator | Tuesday 31 March 2026 04:42:54 +0000 (0:00:00.622) 0:08:27.252 ********* 2026-03-31 04:42:54.950837 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:42:54.950845 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:42:54.950853 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:42:54.950861 | orchestrator | 2026-03-31 04:42:54.950869 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-31 04:42:54.950877 | orchestrator | 2026-03-31 04:42:54.950885 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-31 04:42:54.950901 | orchestrator | Tuesday 31 March 2026 04:42:54 +0000 (0:00:00.358) 0:08:27.610 ********* 2026-03-31 04:43:01.911720 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.911829 | orchestrator | 2026-03-31 04:43:01.911860 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-03-31 04:43:01.911874 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.221) 0:08:27.832 ********* 2026-03-31 04:43:01.911886 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.911897 | orchestrator | 2026-03-31 04:43:01.911908 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 04:43:01.911919 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.231) 0:08:28.064 ********* 2026-03-31 04:43:01.911930 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.911941 | orchestrator | 2026-03-31 04:43:01.911953 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 04:43:01.911964 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.144) 0:08:28.208 ********* 2026-03-31 04:43:01.911975 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.911986 | orchestrator | 2026-03-31 04:43:01.911997 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 04:43:01.912009 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.136) 0:08:28.344 ********* 2026-03-31 04:43:01.912043 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912055 | orchestrator | 2026-03-31 04:43:01.912066 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 04:43:01.912077 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.140) 0:08:28.485 ********* 2026-03-31 04:43:01.912101 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912112 | orchestrator | 2026-03-31 04:43:01.912123 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 04:43:01.912134 | orchestrator | Tuesday 31 March 2026 04:42:55 +0000 (0:00:00.156) 0:08:28.642 ********* 2026-03-31 04:43:01.912145 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 04:43:01.912156 | orchestrator | 2026-03-31 04:43:01.912167 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:43:01.912178 | orchestrator | Tuesday 31 March 2026 04:42:56 +0000 (0:00:00.427) 0:08:29.069 ********* 2026-03-31 04:43:01.912189 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912200 | orchestrator | 2026-03-31 04:43:01.912211 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:43:01.912222 | orchestrator | Tuesday 31 March 2026 04:42:56 +0000 (0:00:00.138) 0:08:29.208 ********* 2026-03-31 04:43:01.912261 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912275 | orchestrator | 2026-03-31 04:43:01.912288 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:43:01.912300 | orchestrator | Tuesday 31 March 2026 04:42:56 +0000 (0:00:00.130) 0:08:29.338 ********* 2026-03-31 04:43:01.912313 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912326 | orchestrator | 2026-03-31 04:43:01.912339 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:43:01.912351 | orchestrator | Tuesday 31 March 2026 04:42:56 +0000 (0:00:00.136) 0:08:29.475 ********* 2026-03-31 04:43:01.912363 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912376 | orchestrator | 2026-03-31 04:43:01.912389 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:43:01.912402 | orchestrator | Tuesday 31 March 2026 04:42:56 +0000 (0:00:00.140) 0:08:29.616 ********* 2026-03-31 04:43:01.912414 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912427 | orchestrator | 2026-03-31 04:43:01.912440 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:43:01.912453 | 
orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.212) 0:08:29.829 ********* 2026-03-31 04:43:01.912466 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912478 | orchestrator | 2026-03-31 04:43:01.912491 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:43:01.912504 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.147) 0:08:29.977 ********* 2026-03-31 04:43:01.912517 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912529 | orchestrator | 2026-03-31 04:43:01.912542 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:43:01.912555 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.131) 0:08:30.108 ********* 2026-03-31 04:43:01.912568 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912580 | orchestrator | 2026-03-31 04:43:01.912593 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:43:01.912606 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.153) 0:08:30.262 ********* 2026-03-31 04:43:01.912617 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912628 | orchestrator | 2026-03-31 04:43:01.912639 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:43:01.912650 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.145) 0:08:30.407 ********* 2026-03-31 04:43:01.912661 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912672 | orchestrator | 2026-03-31 04:43:01.912683 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:43:01.912695 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.134) 0:08:30.542 ********* 2026-03-31 04:43:01.912715 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912726 | orchestrator | 2026-03-31 
04:43:01.912737 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:43:01.912748 | orchestrator | Tuesday 31 March 2026 04:42:57 +0000 (0:00:00.129) 0:08:30.671 ********* 2026-03-31 04:43:01.912759 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912770 | orchestrator | 2026-03-31 04:43:01.912781 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:43:01.912793 | orchestrator | Tuesday 31 March 2026 04:42:58 +0000 (0:00:00.449) 0:08:31.121 ********* 2026-03-31 04:43:01.912804 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912815 | orchestrator | 2026-03-31 04:43:01.912826 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:43:01.912837 | orchestrator | Tuesday 31 March 2026 04:42:58 +0000 (0:00:00.130) 0:08:31.252 ********* 2026-03-31 04:43:01.912848 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912859 | orchestrator | 2026-03-31 04:43:01.912887 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:43:01.912898 | orchestrator | Tuesday 31 March 2026 04:42:58 +0000 (0:00:00.145) 0:08:31.397 ********* 2026-03-31 04:43:01.912909 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912920 | orchestrator | 2026-03-31 04:43:01.912931 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:43:01.912942 | orchestrator | Tuesday 31 March 2026 04:42:58 +0000 (0:00:00.140) 0:08:31.537 ********* 2026-03-31 04:43:01.912953 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.912964 | orchestrator | 2026-03-31 04:43:01.912974 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-31 04:43:01.912985 | orchestrator | Tuesday 31 March 2026 04:42:58 +0000 
(0:00:00.136) 0:08:31.674 ********* 2026-03-31 04:43:01.912996 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913007 | orchestrator | 2026-03-31 04:43:01.913017 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:43:01.913028 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.203) 0:08:31.878 ********* 2026-03-31 04:43:01.913039 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913050 | orchestrator | 2026-03-31 04:43:01.913060 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:43:01.913072 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.125) 0:08:32.004 ********* 2026-03-31 04:43:01.913083 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913093 | orchestrator | 2026-03-31 04:43:01.913111 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:43:01.913122 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.118) 0:08:32.123 ********* 2026-03-31 04:43:01.913133 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913144 | orchestrator | 2026-03-31 04:43:01.913154 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:43:01.913165 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.136) 0:08:32.259 ********* 2026-03-31 04:43:01.913176 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913187 | orchestrator | 2026-03-31 04:43:01.913198 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:43:01.913208 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.139) 0:08:32.399 ********* 2026-03-31 04:43:01.913219 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913251 | orchestrator | 2026-03-31 04:43:01.913271 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:43:01.913290 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.136) 0:08:32.535 ********* 2026-03-31 04:43:01.913308 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913324 | orchestrator | 2026-03-31 04:43:01.913336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:43:01.913347 | orchestrator | Tuesday 31 March 2026 04:42:59 +0000 (0:00:00.139) 0:08:32.675 ********* 2026-03-31 04:43:01.913366 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913376 | orchestrator | 2026-03-31 04:43:01.913387 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:43:01.913398 | orchestrator | Tuesday 31 March 2026 04:43:00 +0000 (0:00:00.145) 0:08:32.821 ********* 2026-03-31 04:43:01.913409 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913420 | orchestrator | 2026-03-31 04:43:01.913431 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:43:01.913442 | orchestrator | Tuesday 31 March 2026 04:43:00 +0000 (0:00:00.512) 0:08:33.333 ********* 2026-03-31 04:43:01.913453 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913464 | orchestrator | 2026-03-31 04:43:01.913474 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:43:01.913485 | orchestrator | Tuesday 31 March 2026 04:43:00 +0000 (0:00:00.148) 0:08:33.482 ********* 2026-03-31 04:43:01.913496 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:01.913507 | orchestrator | 2026-03-31 04:43:01.913518 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:43:01.913529 | orchestrator | Tuesday 31 March 2026 04:43:00 +0000 (0:00:00.147) 0:08:33.630 ********* 2026-03-31 
04:43:01.913540 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913550 | orchestrator |
2026-03-31 04:43:01.913561 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:43:01.913572 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.136) 0:08:33.766 *********
2026-03-31 04:43:01.913583 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913594 | orchestrator |
2026-03-31 04:43:01.913605 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:43:01.913616 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.139) 0:08:33.906 *********
2026-03-31 04:43:01.913627 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913638 | orchestrator |
2026-03-31 04:43:01.913648 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:43:01.913659 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.134) 0:08:34.041 *********
2026-03-31 04:43:01.913670 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913681 | orchestrator |
2026-03-31 04:43:01.913692 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:43:01.913703 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.138) 0:08:34.179 *********
2026-03-31 04:43:01.913713 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913724 | orchestrator |
2026-03-31 04:43:01.913736 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:43:01.913747 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.142) 0:08:34.322 *********
2026-03-31 04:43:01.913758 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913769 | orchestrator |
2026-03-31 04:43:01.913780 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:43:01.913791 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.134) 0:08:34.456 *********
2026-03-31 04:43:01.913802 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:01.913813 | orchestrator |
2026-03-31 04:43:01.913831 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:43:10.643132 | orchestrator | Tuesday 31 March 2026 04:43:01 +0000 (0:00:00.129) 0:08:34.585 *********
2026-03-31 04:43:10.643291 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643311 | orchestrator |
2026-03-31 04:43:10.643325 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:43:10.643337 | orchestrator | Tuesday 31 March 2026 04:43:02 +0000 (0:00:00.132) 0:08:34.718 *********
2026-03-31 04:43:10.643348 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643360 | orchestrator |
2026-03-31 04:43:10.643371 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:43:10.643407 | orchestrator | Tuesday 31 March 2026 04:43:02 +0000 (0:00:00.127) 0:08:34.846 *********
2026-03-31 04:43:10.643419 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643430 | orchestrator |
2026-03-31 04:43:10.643441 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:43:10.643452 | orchestrator | Tuesday 31 March 2026 04:43:02 +0000 (0:00:00.420) 0:08:35.266 *********
2026-03-31 04:43:10.643463 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643480 | orchestrator |
2026-03-31 04:43:10.643498 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:43:10.643517 | orchestrator | Tuesday 31 March 2026 04:43:02 +0000 (0:00:00.165) 0:08:35.431 *********
2026-03-31 04:43:10.643536 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643554 | orchestrator |
2026-03-31 04:43:10.643590 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:43:10.643608 | orchestrator | Tuesday 31 March 2026 04:43:02 +0000 (0:00:00.242) 0:08:35.674 *********
2026-03-31 04:43:10.643628 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643646 | orchestrator |
2026-03-31 04:43:10.643665 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:43:10.643684 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.137) 0:08:35.812 *********
2026-03-31 04:43:10.643703 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643723 | orchestrator |
2026-03-31 04:43:10.643743 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:43:10.643762 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.237) 0:08:36.049 *********
2026-03-31 04:43:10.643779 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643791 | orchestrator |
2026-03-31 04:43:10.643803 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:43:10.643814 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.145) 0:08:36.194 *********
2026-03-31 04:43:10.643825 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643836 | orchestrator |
2026-03-31 04:43:10.643848 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:43:10.643860 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.137) 0:08:36.332 *********
2026-03-31 04:43:10.643872 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643883 | orchestrator |
2026-03-31 04:43:10.643894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:43:10.643904 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.157) 0:08:36.490 *********
2026-03-31 04:43:10.643915 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643926 | orchestrator |
2026-03-31 04:43:10.643937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:43:10.643948 | orchestrator | Tuesday 31 March 2026 04:43:03 +0000 (0:00:00.136) 0:08:36.626 *********
2026-03-31 04:43:10.643959 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.643970 | orchestrator |
2026-03-31 04:43:10.643980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:43:10.643991 | orchestrator | Tuesday 31 March 2026 04:43:04 +0000 (0:00:00.136) 0:08:36.763 *********
2026-03-31 04:43:10.644002 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644013 | orchestrator |
2026-03-31 04:43:10.644024 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:43:10.644035 | orchestrator | Tuesday 31 March 2026 04:43:04 +0000 (0:00:00.140) 0:08:36.904 *********
2026-03-31 04:43:10.644046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:43:10.644057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:43:10.644068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:43:10.644079 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644101 | orchestrator |
2026-03-31 04:43:10.644112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:43:10.644123 | orchestrator | Tuesday 31 March 2026 04:43:04 +0000 (0:00:00.713) 0:08:37.617 *********
2026-03-31 04:43:10.644134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:43:10.644145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:43:10.644161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:43:10.644180 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644199 | orchestrator |
2026-03-31 04:43:10.644217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:43:10.644278 | orchestrator | Tuesday 31 March 2026 04:43:05 +0000 (0:00:00.719) 0:08:38.336 *********
2026-03-31 04:43:10.644300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:43:10.644319 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:43:10.644333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:43:10.644344 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644355 | orchestrator |
2026-03-31 04:43:10.644366 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:43:10.644377 | orchestrator | Tuesday 31 March 2026 04:43:06 +0000 (0:00:01.097) 0:08:39.434 *********
2026-03-31 04:43:10.644388 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644399 | orchestrator |
2026-03-31 04:43:10.644410 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:43:10.644442 | orchestrator | Tuesday 31 March 2026 04:43:06 +0000 (0:00:00.146) 0:08:39.581 *********
2026-03-31 04:43:10.644454 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-31 04:43:10.644465 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644476 | orchestrator |
2026-03-31 04:43:10.644488 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-31 04:43:10.644498 | orchestrator | Tuesday 31 March 2026 04:43:07 +0000 (0:00:00.336) 0:08:39.917 *********
2026-03-31 04:43:10.644509 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644520 | orchestrator |
2026-03-31 04:43:10.644531 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-31 04:43:10.644542 | orchestrator | Tuesday 31 March 2026 04:43:07 +0000 (0:00:00.211) 0:08:40.128 *********
2026-03-31 04:43:10.644553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:43:10.644564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-31 04:43:10.644575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-31 04:43:10.644585 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644596 | orchestrator |
2026-03-31 04:43:10.644607 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-31 04:43:10.644618 | orchestrator | Tuesday 31 March 2026 04:43:07 +0000 (0:00:00.397) 0:08:40.525 *********
2026-03-31 04:43:10.644629 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644640 | orchestrator |
2026-03-31 04:43:10.644659 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-31 04:43:10.644674 | orchestrator | Tuesday 31 March 2026 04:43:07 +0000 (0:00:00.140) 0:08:40.666 *********
2026-03-31 04:43:10.644693 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644711 | orchestrator |
2026-03-31 04:43:10.644730 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-31 04:43:10.644748 | orchestrator | Tuesday 31 March 2026 04:43:08 +0000 (0:00:00.137) 0:08:40.804 *********
2026-03-31 04:43:10.644766 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644783 | orchestrator |
2026-03-31 04:43:10.644800 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-31 04:43:10.644817 | orchestrator | Tuesday 31 March 2026 04:43:08 +0000 (0:00:00.138) 0:08:40.942 *********
2026-03-31 04:43:10.644835 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:43:10.644854 | orchestrator |
2026-03-31 04:43:10.644884 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-31 04:43:10.644902 | orchestrator |
2026-03-31 04:43:10.644921 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-31 04:43:10.644940 | orchestrator | Tuesday 31 March 2026 04:43:08 +0000 (0:00:00.216) 0:08:41.159 *********
2026-03-31 04:43:10.644959 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.644977 | orchestrator |
2026-03-31 04:43:10.644994 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 04:43:10.645013 | orchestrator | Tuesday 31 March 2026 04:43:08 +0000 (0:00:00.231) 0:08:41.391 *********
2026-03-31 04:43:10.645032 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645052 | orchestrator |
2026-03-31 04:43:10.645070 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:43:10.645090 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.480) 0:08:41.871 *********
2026-03-31 04:43:10.645101 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645112 | orchestrator |
2026-03-31 04:43:10.645123 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:43:10.645134 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.130) 0:08:42.002 *********
2026-03-31 04:43:10.645145 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645156 | orchestrator |
2026-03-31 04:43:10.645167 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:43:10.645178 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.139) 0:08:42.141 *********
2026-03-31 04:43:10.645189 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645200 | orchestrator |
2026-03-31 04:43:10.645211 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:43:10.645225 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.139) 0:08:42.280 *********
2026-03-31 04:43:10.645273 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645293 | orchestrator |
2026-03-31 04:43:10.645311 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:43:10.645330 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.138) 0:08:42.419 *********
2026-03-31 04:43:10.645349 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645369 | orchestrator |
2026-03-31 04:43:10.645387 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:43:10.645401 | orchestrator | Tuesday 31 March 2026 04:43:09 +0000 (0:00:00.157) 0:08:42.576 *********
2026-03-31 04:43:10.645412 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645423 | orchestrator |
2026-03-31 04:43:10.645434 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:43:10.645445 | orchestrator | Tuesday 31 March 2026 04:43:10 +0000 (0:00:00.141) 0:08:42.718 *********
2026-03-31 04:43:10.645456 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645467 | orchestrator |
2026-03-31 04:43:10.645478 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:43:10.645489 | orchestrator | Tuesday 31 March 2026 04:43:10 +0000 (0:00:00.138) 0:08:42.856 *********
2026-03-31 04:43:10.645499 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645510 | orchestrator |
2026-03-31 04:43:10.645521 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:43:10.645532 | orchestrator | Tuesday 31 March 2026 04:43:10 +0000 (0:00:00.132) 0:08:42.988 *********
2026-03-31 04:43:10.645543 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645554 | orchestrator |
2026-03-31 04:43:10.645565 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:43:10.645576 | orchestrator | Tuesday 31 March 2026 04:43:10 +0000 (0:00:00.128) 0:08:43.117 *********
2026-03-31 04:43:10.645587 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:10.645598 | orchestrator |
2026-03-31 04:43:10.645623 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:43:17.694465 | orchestrator | Tuesday 31 March 2026 04:43:10 +0000 (0:00:00.194) 0:08:43.311 *********
2026-03-31 04:43:17.694687 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.694708 | orchestrator |
2026-03-31 04:43:17.694733 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:43:17.694745 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.415) 0:08:43.726 *********
2026-03-31 04:43:17.694756 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.694768 | orchestrator |
2026-03-31 04:43:17.694780 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:43:17.694791 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.139) 0:08:43.866 *********
2026-03-31 04:43:17.694802 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.694813 | orchestrator |
2026-03-31 04:43:17.694826 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:43:17.694844 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.144) 0:08:44.011 *********
2026-03-31 04:43:17.694863 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.694882 | orchestrator |
2026-03-31 04:43:17.694902 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:43:17.694915 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.150) 0:08:44.161 *********
2026-03-31 04:43:17.694986 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695000 | orchestrator |
2026-03-31 04:43:17.695029 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:43:17.695042 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.129) 0:08:44.291 *********
2026-03-31 04:43:17.695054 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695066 | orchestrator |
2026-03-31 04:43:17.695081 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:43:17.695101 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.155) 0:08:44.447 *********
2026-03-31 04:43:17.695119 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695138 | orchestrator |
2026-03-31 04:43:17.695156 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:43:17.695197 | orchestrator | Tuesday 31 March 2026 04:43:11 +0000 (0:00:00.132) 0:08:44.579 *********
2026-03-31 04:43:17.695217 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695236 | orchestrator |
2026-03-31 04:43:17.695283 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:43:17.695302 | orchestrator | Tuesday 31 March 2026 04:43:12 +0000 (0:00:00.155) 0:08:44.735 *********
2026-03-31 04:43:17.695320 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695339 | orchestrator |
2026-03-31 04:43:17.695359 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:43:17.695377 | orchestrator | Tuesday 31 March 2026 04:43:12 +0000 (0:00:00.155) 0:08:44.890 *********
2026-03-31 04:43:17.695396 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695415 | orchestrator |
2026-03-31 04:43:17.695432 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:43:17.695451 | orchestrator | Tuesday 31 March 2026 04:43:12 +0000 (0:00:00.136) 0:08:45.027 *********
2026-03-31 04:43:17.695471 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695490 | orchestrator |
2026-03-31 04:43:17.695509 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:43:17.695528 | orchestrator | Tuesday 31 March 2026 04:43:12 +0000 (0:00:00.131) 0:08:45.158 *********
2026-03-31 04:43:17.695540 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695551 | orchestrator |
2026-03-31 04:43:17.695562 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:43:17.695573 | orchestrator | Tuesday 31 March 2026 04:43:12 +0000 (0:00:00.202) 0:08:45.360 *********
2026-03-31 04:43:17.695584 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695594 | orchestrator |
2026-03-31 04:43:17.695606 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:43:17.695641 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.423) 0:08:45.784 *********
2026-03-31 04:43:17.695652 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695663 | orchestrator |
2026-03-31 04:43:17.695675 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:43:17.695687 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.137) 0:08:45.922 *********
2026-03-31 04:43:17.695705 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695724 | orchestrator |
2026-03-31 04:43:17.695743 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:43:17.695755 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.129) 0:08:46.051 *********
2026-03-31 04:43:17.695766 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695777 | orchestrator |
2026-03-31 04:43:17.695788 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:43:17.695799 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.136) 0:08:46.187 *********
2026-03-31 04:43:17.695810 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695821 | orchestrator |
2026-03-31 04:43:17.695832 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:43:17.695843 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.134) 0:08:46.322 *********
2026-03-31 04:43:17.695853 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695864 | orchestrator |
2026-03-31 04:43:17.695875 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:43:17.695886 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.137) 0:08:46.459 *********
2026-03-31 04:43:17.695897 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695908 | orchestrator |
2026-03-31 04:43:17.695919 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:43:17.695930 | orchestrator | Tuesday 31 March 2026 04:43:13 +0000 (0:00:00.149) 0:08:46.608 *********
2026-03-31 04:43:17.695940 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.695951 | orchestrator |
2026-03-31 04:43:17.695962 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:43:17.696001 | orchestrator | Tuesday 31 March 2026 04:43:14 +0000 (0:00:00.206) 0:08:46.815 *********
2026-03-31 04:43:17.696022 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696037 | orchestrator |
2026-03-31 04:43:17.696049 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:43:17.696060 | orchestrator | Tuesday 31 March 2026 04:43:14 +0000 (0:00:00.149) 0:08:46.965 *********
2026-03-31 04:43:17.696071 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696081 | orchestrator |
2026-03-31 04:43:17.696092 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:43:17.696108 | orchestrator | Tuesday 31 March 2026 04:43:14 +0000 (0:00:00.126) 0:08:47.091 *********
2026-03-31 04:43:17.696123 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696134 | orchestrator |
2026-03-31 04:43:17.696145 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:43:17.696156 | orchestrator | Tuesday 31 March 2026 04:43:14 +0000 (0:00:00.117) 0:08:47.209 *********
2026-03-31 04:43:17.696166 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696177 | orchestrator |
2026-03-31 04:43:17.696189 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:43:17.696199 | orchestrator | Tuesday 31 March 2026 04:43:14 +0000 (0:00:00.120) 0:08:47.330 *********
2026-03-31 04:43:17.696210 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696221 | orchestrator |
2026-03-31 04:43:17.696269 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:43:17.696282 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.423) 0:08:47.753 *********
2026-03-31 04:43:17.696293 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696304 | orchestrator |
2026-03-31 04:43:17.696315 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:43:17.696337 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.129) 0:08:47.882 *********
2026-03-31 04:43:17.696349 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696360 | orchestrator |
2026-03-31 04:43:17.696371 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:43:17.696402 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.144) 0:08:48.027 *********
2026-03-31 04:43:17.696414 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696425 | orchestrator |
2026-03-31 04:43:17.696435 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:43:17.696446 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.152) 0:08:48.179 *********
2026-03-31 04:43:17.696458 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696468 | orchestrator |
2026-03-31 04:43:17.696479 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:43:17.696491 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.142) 0:08:48.321 *********
2026-03-31 04:43:17.696502 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696513 | orchestrator |
2026-03-31 04:43:17.696524 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:43:17.696535 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.143) 0:08:48.465 *********
2026-03-31 04:43:17.696545 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696556 | orchestrator |
2026-03-31 04:43:17.696567 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:43:17.696578 | orchestrator | Tuesday 31 March 2026 04:43:15 +0000 (0:00:00.141) 0:08:48.606 *********
2026-03-31 04:43:17.696589 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696600 | orchestrator |
2026-03-31 04:43:17.696611 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:43:17.696622 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.125) 0:08:48.732 *********
2026-03-31 04:43:17.696633 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696644 | orchestrator |
2026-03-31 04:43:17.696655 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:43:17.696665 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.149) 0:08:48.881 *********
2026-03-31 04:43:17.696676 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696687 | orchestrator |
2026-03-31 04:43:17.696698 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:43:17.696709 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.235) 0:08:49.117 *********
2026-03-31 04:43:17.696720 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696731 | orchestrator |
2026-03-31 04:43:17.696742 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:43:17.696753 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.144) 0:08:49.262 *********
2026-03-31 04:43:17.696764 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696775 | orchestrator |
2026-03-31 04:43:17.696786 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:43:17.696797 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.234) 0:08:49.496 *********
2026-03-31 04:43:17.696808 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696819 | orchestrator |
2026-03-31 04:43:17.696830 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:43:17.696841 | orchestrator | Tuesday 31 March 2026 04:43:16 +0000 (0:00:00.140) 0:08:49.636 *********
2026-03-31 04:43:17.696852 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696863 | orchestrator |
2026-03-31 04:43:17.696874 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:43:17.696886 | orchestrator | Tuesday 31 March 2026 04:43:17 +0000 (0:00:00.158) 0:08:49.795 *********
2026-03-31 04:43:17.696904 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696915 | orchestrator |
2026-03-31 04:43:17.696926 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:43:17.696937 | orchestrator | Tuesday 31 March 2026 04:43:17 +0000 (0:00:00.415) 0:08:50.211 *********
2026-03-31 04:43:17.696948 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:17.696959 | orchestrator |
2026-03-31 04:43:17.696978 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:43:25.954517 | orchestrator | Tuesday 31 March 2026 04:43:17 +0000 (0:00:00.156) 0:08:50.368 *********
2026-03-31 04:43:25.954632 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.954649 | orchestrator |
2026-03-31 04:43:25.954662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:43:25.954674 | orchestrator | Tuesday 31 March 2026 04:43:17 +0000 (0:00:00.161) 0:08:50.529 *********
2026-03-31 04:43:25.954686 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.954698 | orchestrator |
2026-03-31 04:43:25.954709 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:43:25.954720 | orchestrator | Tuesday 31 March 2026 04:43:17 +0000 (0:00:00.147) 0:08:50.676 *********
2026-03-31 04:43:25.954732 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-31 04:43:25.954743 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-31 04:43:25.954754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-31 04:43:25.954765 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.954776 | orchestrator |
2026-03-31 04:43:25.954788 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:43:25.954799 | orchestrator | Tuesday 31 March 2026 04:43:18 +0000 (0:00:00.404) 0:08:51.081 *********
2026-03-31 04:43:25.954826 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-31 04:43:25.954837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-31 04:43:25.954849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-31 04:43:25.954860 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.954871 | orchestrator |
2026-03-31 04:43:25.954882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:43:25.954893 | orchestrator | Tuesday 31 March 2026 04:43:18 +0000 (0:00:00.466) 0:08:51.548 *********
2026-03-31 04:43:25.954904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-31 04:43:25.954915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-31 04:43:25.954926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-31 04:43:25.954937 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.954948 | orchestrator |
2026-03-31 04:43:25.954959 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:43:25.954970 | orchestrator | Tuesday 31 March 2026 04:43:19 +0000 (0:00:00.158) 0:08:51.998 *********
2026-03-31 04:43:25.954997 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955009 | orchestrator |
2026-03-31 04:43:25.955030 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:43:25.955043 | orchestrator | Tuesday 31 March 2026 04:43:19 +0000 (0:00:00.158) 0:08:52.157 *********
2026-03-31 04:43:25.955056 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-31 04:43:25.955069 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955081 | orchestrator |
2026-03-31 04:43:25.955094 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-31 04:43:25.955107 | orchestrator | Tuesday 31 March 2026 04:43:19 +0000 (0:00:00.336) 0:08:52.494 *********
2026-03-31 04:43:25.955119 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955132 | orchestrator |
2026-03-31 04:43:25.955144 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-31 04:43:25.955158 | orchestrator | Tuesday 31 March 2026 04:43:20 +0000 (0:00:00.220) 0:08:52.714 *********
2026-03-31 04:43:25.955194 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-31 04:43:25.955207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-31 04:43:25.955219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-31 04:43:25.955231 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955243 | orchestrator |
2026-03-31 04:43:25.955275 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-31 04:43:25.955288 | orchestrator | Tuesday 31 March 2026 04:43:20 +0000 (0:00:00.783) 0:08:53.497 *********
2026-03-31 04:43:25.955301 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955313 | orchestrator |
2026-03-31 04:43:25.955326 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-31 04:43:25.955338 | orchestrator | Tuesday 31 March 2026 04:43:21 +0000 (0:00:00.399) 0:08:53.897 *********
2026-03-31 04:43:25.955353 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955372 | orchestrator |
2026-03-31 04:43:25.955391 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-31 04:43:25.955410 | orchestrator | Tuesday 31 March 2026 04:43:21 +0000 (0:00:00.145) 0:08:54.042 *********
2026-03-31 04:43:25.955440 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955459 | orchestrator |
2026-03-31 04:43:25.955476 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-31 04:43:25.955493 | orchestrator | Tuesday 31 March 2026 04:43:21 +0000 (0:00:00.153) 0:08:54.195 *********
2026-03-31 04:43:25.955511 | orchestrator | skipping: [testbed-node-1]
2026-03-31 04:43:25.955529 | orchestrator |
2026-03-31 04:43:25.955547 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-31 04:43:25.955566 | orchestrator |
2026-03-31 04:43:25.955584 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-31 04:43:25.955603 | orchestrator | Tuesday 31 March 2026 04:43:21 +0000 (0:00:00.211) 0:08:54.407 *********
2026-03-31 04:43:25.955621 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955641 | orchestrator |
2026-03-31 04:43:25.955653 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 04:43:25.955664 | orchestrator | Tuesday 31 March 2026 04:43:21 +0000 (0:00:00.228) 0:08:54.635 *********
2026-03-31 04:43:25.955675 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955686 | orchestrator |
2026-03-31 04:43:25.955696 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:43:25.955707 | orchestrator | Tuesday 31 March 2026 04:43:22 +0000 (0:00:00.209) 0:08:54.845 *********
2026-03-31 04:43:25.955718 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955729 | orchestrator |
2026-03-31 04:43:25.955758 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:43:25.955770 | orchestrator | Tuesday 31 March 2026 04:43:22 +0000 (0:00:00.140) 0:08:54.985 *********
2026-03-31 04:43:25.955781 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955792 | orchestrator |
2026-03-31 04:43:25.955803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:43:25.955813 | orchestrator | Tuesday 31 March 2026 04:43:22 +0000 (0:00:00.144) 0:08:55.129 *********
2026-03-31 04:43:25.955824 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955835 | orchestrator |
2026-03-31 04:43:25.955846 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:43:25.955857 | orchestrator | Tuesday 31 March 2026 04:43:22 +0000 (0:00:00.144) 0:08:55.273 *********
2026-03-31 04:43:25.955867 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955878 | orchestrator |
2026-03-31 04:43:25.955889 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:43:25.955900 | orchestrator | Tuesday 31 March 2026 04:43:22 +0000 (0:00:00.132) 0:08:55.406 *********
2026-03-31 04:43:25.955911 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:43:25.955922 | orchestrator |
2026-03-31 04:43:25.955932 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:43:25.955955 | orchestrator | Tuesday 31
March 2026 04:43:23 +0000 (0:00:00.455) 0:08:55.862 ********* 2026-03-31 04:43:25.955974 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.955985 | orchestrator | 2026-03-31 04:43:25.955996 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:43:25.956007 | orchestrator | Tuesday 31 March 2026 04:43:23 +0000 (0:00:00.167) 0:08:56.029 ********* 2026-03-31 04:43:25.956018 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956029 | orchestrator | 2026-03-31 04:43:25.956040 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:43:25.956051 | orchestrator | Tuesday 31 March 2026 04:43:23 +0000 (0:00:00.156) 0:08:56.185 ********* 2026-03-31 04:43:25.956062 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956073 | orchestrator | 2026-03-31 04:43:25.956083 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:43:25.956094 | orchestrator | Tuesday 31 March 2026 04:43:23 +0000 (0:00:00.174) 0:08:56.360 ********* 2026-03-31 04:43:25.956105 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956116 | orchestrator | 2026-03-31 04:43:25.956127 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:43:25.956138 | orchestrator | Tuesday 31 March 2026 04:43:23 +0000 (0:00:00.136) 0:08:56.497 ********* 2026-03-31 04:43:25.956148 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956159 | orchestrator | 2026-03-31 04:43:25.956170 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:43:25.956181 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.215) 0:08:56.712 ********* 2026-03-31 04:43:25.956192 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956203 | orchestrator | 2026-03-31 04:43:25.956214 | 
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:43:25.956225 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.128) 0:08:56.840 ********* 2026-03-31 04:43:25.956235 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956247 | orchestrator | 2026-03-31 04:43:25.956304 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:43:25.956315 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.137) 0:08:56.978 ********* 2026-03-31 04:43:25.956326 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956337 | orchestrator | 2026-03-31 04:43:25.956348 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:43:25.956359 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.137) 0:08:57.116 ********* 2026-03-31 04:43:25.956370 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956381 | orchestrator | 2026-03-31 04:43:25.956392 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:43:25.956403 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.137) 0:08:57.253 ********* 2026-03-31 04:43:25.956414 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956425 | orchestrator | 2026-03-31 04:43:25.956436 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:43:25.956446 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.140) 0:08:57.394 ********* 2026-03-31 04:43:25.956457 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956468 | orchestrator | 2026-03-31 04:43:25.956479 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:43:25.956490 | orchestrator | Tuesday 31 March 2026 04:43:24 +0000 (0:00:00.145) 0:08:57.540 ********* 
2026-03-31 04:43:25.956501 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956512 | orchestrator | 2026-03-31 04:43:25.956523 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:43:25.956535 | orchestrator | Tuesday 31 March 2026 04:43:25 +0000 (0:00:00.145) 0:08:57.686 ********* 2026-03-31 04:43:25.956546 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956557 | orchestrator | 2026-03-31 04:43:25.956568 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:43:25.956586 | orchestrator | Tuesday 31 March 2026 04:43:25 +0000 (0:00:00.506) 0:08:58.192 ********* 2026-03-31 04:43:25.956597 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956608 | orchestrator | 2026-03-31 04:43:25.956619 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:43:25.956630 | orchestrator | Tuesday 31 March 2026 04:43:25 +0000 (0:00:00.141) 0:08:58.333 ********* 2026-03-31 04:43:25.956641 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956652 | orchestrator | 2026-03-31 04:43:25.956662 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:43:25.956673 | orchestrator | Tuesday 31 March 2026 04:43:25 +0000 (0:00:00.160) 0:08:58.494 ********* 2026-03-31 04:43:25.956684 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:25.956695 | orchestrator | 2026-03-31 04:43:25.956706 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-31 04:43:25.956724 | orchestrator | Tuesday 31 March 2026 04:43:25 +0000 (0:00:00.130) 0:08:58.625 ********* 2026-03-31 04:43:34.368030 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368145 | orchestrator | 2026-03-31 04:43:34.368163 | orchestrator | TASK [ceph-container-common : Generate 
systemd ceph target file] *************** 2026-03-31 04:43:34.368176 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.203) 0:08:58.828 ********* 2026-03-31 04:43:34.368188 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368199 | orchestrator | 2026-03-31 04:43:34.368210 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:43:34.368221 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.125) 0:08:58.953 ********* 2026-03-31 04:43:34.368232 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368243 | orchestrator | 2026-03-31 04:43:34.368255 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:43:34.368265 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.135) 0:08:59.088 ********* 2026-03-31 04:43:34.368366 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368384 | orchestrator | 2026-03-31 04:43:34.368396 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:43:34.368407 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.139) 0:08:59.228 ********* 2026-03-31 04:43:34.368418 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368429 | orchestrator | 2026-03-31 04:43:34.368457 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:43:34.368469 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.141) 0:08:59.370 ********* 2026-03-31 04:43:34.368480 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368491 | orchestrator | 2026-03-31 04:43:34.368502 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:43:34.368513 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.130) 0:08:59.500 ********* 2026-03-31 04:43:34.368524 | orchestrator | skipping: 
[testbed-node-2] 2026-03-31 04:43:34.368535 | orchestrator | 2026-03-31 04:43:34.368547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:43:34.368558 | orchestrator | Tuesday 31 March 2026 04:43:26 +0000 (0:00:00.135) 0:08:59.635 ********* 2026-03-31 04:43:34.368571 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368584 | orchestrator | 2026-03-31 04:43:34.368596 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:43:34.368609 | orchestrator | Tuesday 31 March 2026 04:43:27 +0000 (0:00:00.133) 0:08:59.769 ********* 2026-03-31 04:43:34.368621 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368635 | orchestrator | 2026-03-31 04:43:34.368647 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:43:34.368659 | orchestrator | Tuesday 31 March 2026 04:43:27 +0000 (0:00:00.505) 0:09:00.275 ********* 2026-03-31 04:43:34.368672 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368685 | orchestrator | 2026-03-31 04:43:34.368698 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:43:34.368732 | orchestrator | Tuesday 31 March 2026 04:43:27 +0000 (0:00:00.165) 0:09:00.440 ********* 2026-03-31 04:43:34.368743 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368755 | orchestrator | 2026-03-31 04:43:34.368766 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:43:34.368776 | orchestrator | Tuesday 31 March 2026 04:43:27 +0000 (0:00:00.153) 0:09:00.594 ********* 2026-03-31 04:43:34.368787 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368799 | orchestrator | 2026-03-31 04:43:34.368810 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:43:34.368821 
| orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.150) 0:09:00.744 ********* 2026-03-31 04:43:34.368832 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368843 | orchestrator | 2026-03-31 04:43:34.368854 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:43:34.368865 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.133) 0:09:00.878 ********* 2026-03-31 04:43:34.368875 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368886 | orchestrator | 2026-03-31 04:43:34.368897 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:43:34.368908 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.144) 0:09:01.022 ********* 2026-03-31 04:43:34.368919 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368930 | orchestrator | 2026-03-31 04:43:34.368941 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:43:34.368952 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.146) 0:09:01.168 ********* 2026-03-31 04:43:34.368963 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.368973 | orchestrator | 2026-03-31 04:43:34.368984 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:43:34.368996 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.142) 0:09:01.311 ********* 2026-03-31 04:43:34.369007 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369018 | orchestrator | 2026-03-31 04:43:34.369029 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:43:34.369040 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.134) 0:09:01.446 ********* 2026-03-31 04:43:34.369050 | orchestrator | skipping: [testbed-node-2] 
2026-03-31 04:43:34.369061 | orchestrator | 2026-03-31 04:43:34.369072 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:43:34.369083 | orchestrator | Tuesday 31 March 2026 04:43:28 +0000 (0:00:00.125) 0:09:01.571 ********* 2026-03-31 04:43:34.369094 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369105 | orchestrator | 2026-03-31 04:43:34.369116 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:43:34.369127 | orchestrator | Tuesday 31 March 2026 04:43:29 +0000 (0:00:00.122) 0:09:01.693 ********* 2026-03-31 04:43:34.369138 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369149 | orchestrator | 2026-03-31 04:43:34.369160 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:43:34.369191 | orchestrator | Tuesday 31 March 2026 04:43:29 +0000 (0:00:00.136) 0:09:01.830 ********* 2026-03-31 04:43:34.369202 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369213 | orchestrator | 2026-03-31 04:43:34.369224 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:43:34.369235 | orchestrator | Tuesday 31 March 2026 04:43:29 +0000 (0:00:00.432) 0:09:02.263 ********* 2026-03-31 04:43:34.369246 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369256 | orchestrator | 2026-03-31 04:43:34.369267 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:43:34.369296 | orchestrator | Tuesday 31 March 2026 04:43:29 +0000 (0:00:00.128) 0:09:02.391 ********* 2026-03-31 04:43:34.369316 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369327 | orchestrator | 2026-03-31 04:43:34.369338 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-03-31 04:43:34.369349 | orchestrator | Tuesday 31 March 2026 04:43:29 +0000 (0:00:00.208) 0:09:02.600 ********* 2026-03-31 04:43:34.369359 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369370 | orchestrator | 2026-03-31 04:43:34.369381 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:43:34.369392 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.153) 0:09:02.754 ********* 2026-03-31 04:43:34.369402 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369413 | orchestrator | 2026-03-31 04:43:34.369430 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:43:34.369441 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.248) 0:09:03.002 ********* 2026-03-31 04:43:34.369452 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369463 | orchestrator | 2026-03-31 04:43:34.369474 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:43:34.369484 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.143) 0:09:03.145 ********* 2026-03-31 04:43:34.369495 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369506 | orchestrator | 2026-03-31 04:43:34.369517 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:43:34.369529 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.141) 0:09:03.287 ********* 2026-03-31 04:43:34.369540 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369551 | orchestrator | 2026-03-31 04:43:34.369561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:43:34.369572 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.139) 0:09:03.426 ********* 2026-03-31 04:43:34.369583 | orchestrator 
| skipping: [testbed-node-2] 2026-03-31 04:43:34.369594 | orchestrator | 2026-03-31 04:43:34.369604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:43:34.369615 | orchestrator | Tuesday 31 March 2026 04:43:30 +0000 (0:00:00.140) 0:09:03.566 ********* 2026-03-31 04:43:34.369626 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369636 | orchestrator | 2026-03-31 04:43:34.369647 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:43:34.369658 | orchestrator | Tuesday 31 March 2026 04:43:31 +0000 (0:00:00.140) 0:09:03.706 ********* 2026-03-31 04:43:34.369668 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369679 | orchestrator | 2026-03-31 04:43:34.369690 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:43:34.369701 | orchestrator | Tuesday 31 March 2026 04:43:31 +0000 (0:00:00.136) 0:09:03.843 ********* 2026-03-31 04:43:34.369711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 04:43:34.369722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:43:34.369733 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:43:34.369744 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369755 | orchestrator | 2026-03-31 04:43:34.369765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:43:34.369776 | orchestrator | Tuesday 31 March 2026 04:43:31 +0000 (0:00:00.769) 0:09:04.612 ********* 2026-03-31 04:43:34.369787 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 04:43:34.369798 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:43:34.369809 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:43:34.369819 | 
orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369830 | orchestrator | 2026-03-31 04:43:34.369841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:43:34.369851 | orchestrator | Tuesday 31 March 2026 04:43:32 +0000 (0:00:00.708) 0:09:05.321 ********* 2026-03-31 04:43:34.369869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 04:43:34.369880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:43:34.369891 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:43:34.369901 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369912 | orchestrator | 2026-03-31 04:43:34.369923 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:43:34.369934 | orchestrator | Tuesday 31 March 2026 04:43:33 +0000 (0:00:01.009) 0:09:06.330 ********* 2026-03-31 04:43:34.369944 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.369955 | orchestrator | 2026-03-31 04:43:34.369966 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:43:34.369977 | orchestrator | Tuesday 31 March 2026 04:43:33 +0000 (0:00:00.152) 0:09:06.482 ********* 2026-03-31 04:43:34.369988 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-31 04:43:34.369998 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.370009 | orchestrator | 2026-03-31 04:43:34.370083 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:43:34.370097 | orchestrator | Tuesday 31 March 2026 04:43:34 +0000 (0:00:00.336) 0:09:06.819 ********* 2026-03-31 04:43:34.370108 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:34.370119 | orchestrator | 2026-03-31 04:43:34.370130 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-03-31 04:43:34.370149 | orchestrator | Tuesday 31 March 2026 04:43:34 +0000 (0:00:00.221) 0:09:07.040 ********* 2026-03-31 04:43:56.265093 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:43:56.265202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:43:56.265214 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:43:56.265222 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:56.265230 | orchestrator | 2026-03-31 04:43:56.265238 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-31 04:43:56.265246 | orchestrator | Tuesday 31 March 2026 04:43:34 +0000 (0:00:00.405) 0:09:07.446 ********* 2026-03-31 04:43:56.265253 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:56.265260 | orchestrator | 2026-03-31 04:43:56.265268 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-31 04:43:56.265275 | orchestrator | Tuesday 31 March 2026 04:43:34 +0000 (0:00:00.127) 0:09:07.573 ********* 2026-03-31 04:43:56.265281 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:56.265288 | orchestrator | 2026-03-31 04:43:56.265295 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-31 04:43:56.265302 | orchestrator | Tuesday 31 March 2026 04:43:35 +0000 (0:00:00.151) 0:09:07.725 ********* 2026-03-31 04:43:56.265309 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:56.265316 | orchestrator | 2026-03-31 04:43:56.265373 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-31 04:43:56.265388 | orchestrator | Tuesday 31 March 2026 04:43:35 +0000 (0:00:00.141) 0:09:07.866 ********* 2026-03-31 04:43:56.265399 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:43:56.265410 | orchestrator | 2026-03-31 
04:43:56.265418 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-31 04:43:56.265424 | orchestrator | 2026-03-31 04:43:56.265432 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-31 04:43:56.265439 | orchestrator | Tuesday 31 March 2026 04:43:35 +0000 (0:00:00.228) 0:09:08.095 ********* 2026-03-31 04:43:56.265446 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:43:56.265453 | orchestrator | 2026-03-31 04:43:56.265460 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-31 04:43:56.265466 | orchestrator | Tuesday 31 March 2026 04:43:47 +0000 (0:00:11.844) 0:09:19.940 ********* 2026-03-31 04:43:56.265473 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:43:56.265480 | orchestrator | 2026-03-31 04:43:56.265487 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:43:56.265512 | orchestrator | Tuesday 31 March 2026 04:43:49 +0000 (0:00:01.769) 0:09:21.709 ********* 2026-03-31 04:43:56.265519 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-31 04:43:56.265526 | orchestrator | 2026-03-31 04:43:56.265532 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:43:56.265539 | orchestrator | Tuesday 31 March 2026 04:43:49 +0000 (0:00:00.251) 0:09:21.961 ********* 2026-03-31 04:43:56.265546 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265554 | orchestrator | 2026-03-31 04:43:56.265563 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:43:56.265570 | orchestrator | Tuesday 31 March 2026 04:43:49 +0000 (0:00:00.450) 0:09:22.412 ********* 2026-03-31 04:43:56.265582 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265593 | orchestrator | 2026-03-31 04:43:56.265604 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:43:56.265614 | orchestrator | Tuesday 31 March 2026 04:43:49 +0000 (0:00:00.136) 0:09:22.549 ********* 2026-03-31 04:43:56.265625 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265636 | orchestrator | 2026-03-31 04:43:56.265648 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:43:56.265658 | orchestrator | Tuesday 31 March 2026 04:43:50 +0000 (0:00:00.527) 0:09:23.076 ********* 2026-03-31 04:43:56.265670 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265683 | orchestrator | 2026-03-31 04:43:56.265695 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:43:56.265708 | orchestrator | Tuesday 31 March 2026 04:43:50 +0000 (0:00:00.147) 0:09:23.224 ********* 2026-03-31 04:43:56.265719 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265732 | orchestrator | 2026-03-31 04:43:56.265744 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:43:56.265755 | orchestrator | Tuesday 31 March 2026 04:43:50 +0000 (0:00:00.148) 0:09:23.373 ********* 2026-03-31 04:43:56.265766 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265776 | orchestrator | 2026-03-31 04:43:56.265789 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:43:56.265802 | orchestrator | Tuesday 31 March 2026 04:43:50 +0000 (0:00:00.162) 0:09:23.535 ********* 2026-03-31 04:43:56.265813 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:56.265824 | orchestrator | 2026-03-31 04:43:56.265834 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:43:56.265842 | orchestrator | Tuesday 31 March 2026 04:43:51 +0000 (0:00:00.151) 0:09:23.687 ********* 2026-03-31 
04:43:56.265849 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265857 | orchestrator | 2026-03-31 04:43:56.265865 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:43:56.265874 | orchestrator | Tuesday 31 March 2026 04:43:51 +0000 (0:00:00.139) 0:09:23.826 ********* 2026-03-31 04:43:56.265887 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:43:56.265898 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:43:56.265910 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:43:56.265920 | orchestrator | 2026-03-31 04:43:56.265930 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:43:56.265941 | orchestrator | Tuesday 31 March 2026 04:43:52 +0000 (0:00:00.999) 0:09:24.825 ********* 2026-03-31 04:43:56.265952 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:56.265964 | orchestrator | 2026-03-31 04:43:56.265975 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:43:56.265986 | orchestrator | Tuesday 31 March 2026 04:43:52 +0000 (0:00:00.238) 0:09:25.064 ********* 2026-03-31 04:43:56.266060 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:43:56.266071 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:43:56.266087 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:43:56.266094 | orchestrator | 2026-03-31 04:43:56.266101 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:43:56.266108 | orchestrator | Tuesday 31 March 2026 04:43:54 +0000 (0:00:02.393) 0:09:27.457 ********* 2026-03-31 04:43:56.266115 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:43:56.266121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:43:56.266128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:43:56.266135 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:56.266142 | orchestrator | 2026-03-31 04:43:56.266148 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:43:56.266155 | orchestrator | Tuesday 31 March 2026 04:43:55 +0000 (0:00:00.452) 0:09:27.910 ********* 2026-03-31 04:43:56.266169 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266180 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266187 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266194 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:56.266201 | orchestrator | 2026-03-31 04:43:56.266208 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:43:56.266215 | orchestrator | Tuesday 31 March 2026 04:43:55 +0000 (0:00:00.648) 0:09:28.558 ********* 2026-03-31 04:43:56.266224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:43:56.266249 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:56.266256 | orchestrator | 2026-03-31 04:43:56.266263 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:43:56.266270 | orchestrator | Tuesday 31 March 2026 04:43:56 +0000 (0:00:00.160) 0:09:28.718 ********* 2026-03-31 04:43:56.266278 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:43:53.227883', 'end': '2026-03-31 04:43:53.271793', 'delta': '0:00:00.043910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:43:56.266300 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:43:53.789145', 'end': '2026-03-31 04:43:53.835636', 'delta': '0:00:00.046491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:43:59.986683 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:43:54.342538', 'end': '2026-03-31 04:43:54.381765', 'delta': '0:00:00.039227', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:43:59.986793 | orchestrator | 2026-03-31 04:43:59.986812 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:43:59.986826 | orchestrator | Tuesday 31 March 2026 04:43:56 +0000 (0:00:00.213) 0:09:28.932 ********* 2026-03-31 04:43:59.986837 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:59.986849 | orchestrator | 2026-03-31 04:43:59.986861 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:43:59.986872 | orchestrator | Tuesday 31 March 2026 04:43:56 +0000 (0:00:00.283) 0:09:29.216 ********* 2026-03-31 04:43:59.986883 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.986895 | orchestrator | 2026-03-31 04:43:59.986906 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:43:59.986917 | orchestrator | Tuesday 31 March 2026 04:43:56 +0000 (0:00:00.262) 0:09:29.479 ********* 2026-03-31 04:43:59.986928 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:59.986939 | orchestrator | 2026-03-31 04:43:59.986950 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:43:59.986961 | orchestrator | Tuesday 31 March 2026 04:43:56 +0000 (0:00:00.155) 0:09:29.634 ********* 2026-03-31 04:43:59.986972 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:59.986983 | orchestrator | 2026-03-31 04:43:59.986994 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:43:59.987005 | orchestrator | Tuesday 31 March 2026 04:43:57 +0000 (0:00:00.982) 0:09:30.617 ********* 2026-03-31 04:43:59.987016 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:43:59.987027 | orchestrator | 2026-03-31 04:43:59.987038 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:43:59.987049 | orchestrator | Tuesday 31 March 2026 04:43:58 +0000 (0:00:00.159) 0:09:30.777 ********* 2026-03-31 04:43:59.987060 | orchestrator | skipping: 
[testbed-node-0] 2026-03-31 04:43:59.987071 | orchestrator | 2026-03-31 04:43:59.987082 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:43:59.987093 | orchestrator | Tuesday 31 March 2026 04:43:58 +0000 (0:00:00.133) 0:09:30.911 ********* 2026-03-31 04:43:59.987129 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987141 | orchestrator | 2026-03-31 04:43:59.987153 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:43:59.987168 | orchestrator | Tuesday 31 March 2026 04:43:58 +0000 (0:00:00.238) 0:09:31.150 ********* 2026-03-31 04:43:59.987180 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987192 | orchestrator | 2026-03-31 04:43:59.987205 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:43:59.987217 | orchestrator | Tuesday 31 March 2026 04:43:58 +0000 (0:00:00.135) 0:09:31.285 ********* 2026-03-31 04:43:59.987230 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987243 | orchestrator | 2026-03-31 04:43:59.987255 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:43:59.987268 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.432) 0:09:31.718 ********* 2026-03-31 04:43:59.987280 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987293 | orchestrator | 2026-03-31 04:43:59.987305 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:43:59.987318 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.142) 0:09:31.860 ********* 2026-03-31 04:43:59.987330 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987370 | orchestrator | 2026-03-31 04:43:59.987392 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:43:59.987409 | 
orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.140) 0:09:32.001 ********* 2026-03-31 04:43:59.987432 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987458 | orchestrator | 2026-03-31 04:43:59.987477 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:43:59.987497 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.143) 0:09:32.144 ********* 2026-03-31 04:43:59.987516 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987535 | orchestrator | 2026-03-31 04:43:59.987553 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:43:59.987572 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.136) 0:09:32.281 ********* 2026-03-31 04:43:59.987591 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:43:59.987608 | orchestrator | 2026-03-31 04:43:59.987628 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:43:59.987648 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.139) 0:09:32.420 ********* 2026-03-31 04:43:59.987694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:43:59.987811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:43:59.987885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:44:00.240719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:44:00.240848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:44:00.240865 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:00.240878 | orchestrator | 2026-03-31 04:44:00.240891 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:44:00.240903 | orchestrator | Tuesday 31 March 2026 04:43:59 +0000 (0:00:00.237) 0:09:32.658 ********* 2026-03-31 04:44:00.240918 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.240931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.240943 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.240968 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.241001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.241021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.241033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.241103 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:00.241136 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:10.860524 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:44:10.860643 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.860661 | orchestrator | 2026-03-31 04:44:10.860674 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:44:10.860687 | orchestrator | Tuesday 31 March 2026 04:44:00 +0000 (0:00:00.247) 0:09:32.906 ********* 2026-03-31 04:44:10.860699 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:44:10.860711 | orchestrator | 2026-03-31 04:44:10.860722 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:44:10.860734 | orchestrator 
| Tuesday 31 March 2026 04:44:00 +0000 (0:00:00.535) 0:09:33.441 ********* 2026-03-31 04:44:10.860745 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:44:10.860756 | orchestrator | 2026-03-31 04:44:10.860768 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:44:10.860779 | orchestrator | Tuesday 31 March 2026 04:44:00 +0000 (0:00:00.137) 0:09:33.579 ********* 2026-03-31 04:44:10.860790 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:44:10.860802 | orchestrator | 2026-03-31 04:44:10.860813 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:44:10.860824 | orchestrator | Tuesday 31 March 2026 04:44:01 +0000 (0:00:00.481) 0:09:34.060 ********* 2026-03-31 04:44:10.860836 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.860847 | orchestrator | 2026-03-31 04:44:10.860858 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:44:10.860870 | orchestrator | Tuesday 31 March 2026 04:44:01 +0000 (0:00:00.140) 0:09:34.201 ********* 2026-03-31 04:44:10.860881 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.860892 | orchestrator | 2026-03-31 04:44:10.860903 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:44:10.860915 | orchestrator | Tuesday 31 March 2026 04:44:01 +0000 (0:00:00.238) 0:09:34.440 ********* 2026-03-31 04:44:10.860926 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.860937 | orchestrator | 2026-03-31 04:44:10.860948 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:44:10.860962 | orchestrator | Tuesday 31 March 2026 04:44:02 +0000 (0:00:00.449) 0:09:34.890 ********* 2026-03-31 04:44:10.860982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:44:10.861003 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:44:10.861022 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:44:10.861042 | orchestrator | 2026-03-31 04:44:10.861062 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:44:10.861081 | orchestrator | Tuesday 31 March 2026 04:44:02 +0000 (0:00:00.711) 0:09:35.601 ********* 2026-03-31 04:44:10.861101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:44:10.861152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:44:10.861174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:44:10.861196 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.861216 | orchestrator | 2026-03-31 04:44:10.861230 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:44:10.861241 | orchestrator | Tuesday 31 March 2026 04:44:03 +0000 (0:00:00.181) 0:09:35.783 ********* 2026-03-31 04:44:10.861252 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:44:10.861263 | orchestrator | 2026-03-31 04:44:10.861273 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:44:10.861284 | orchestrator | Tuesday 31 March 2026 04:44:03 +0000 (0:00:00.128) 0:09:35.912 ********* 2026-03-31 04:44:10.861295 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:44:10.861306 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:44:10.861324 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:44:10.861343 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:44:10.861407 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-31 04:44:10.861428 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:44:10.861446 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:44:10.861464 | orchestrator | 2026-03-31 04:44:10.861483 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:44:10.861503 | orchestrator | Tuesday 31 March 2026 04:44:04 +0000 (0:00:00.817) 0:09:36.729 ********* 2026-03-31 04:44:10.861521 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:44:10.861539 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:44:10.861551 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:44:10.861561 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:44:10.861592 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:44:10.861604 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:44:10.861615 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:44:10.861626 | orchestrator | 2026-03-31 04:44:10.861637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 04:44:10.861648 | orchestrator | Tuesday 31 March 2026 04:44:05 +0000 (0:00:01.698) 0:09:38.428 ********* 2026-03-31 04:44:10.861659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-31 04:44:10.861671 | orchestrator | 2026-03-31 04:44:10.861682 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 04:44:10.861693 
| orchestrator | Tuesday 31 March 2026 04:44:05 +0000 (0:00:00.215) 0:09:38.643 *********
2026-03-31 04:44:10.861703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-31 04:44:10.861715 | orchestrator |
2026-03-31 04:44:10.861725 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:44:10.861736 | orchestrator | Tuesday 31 March 2026 04:44:06 +0000 (0:00:00.226) 0:09:38.869 *********
2026-03-31 04:44:10.861747 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:10.861758 | orchestrator |
2026-03-31 04:44:10.861769 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:44:10.861780 | orchestrator | Tuesday 31 March 2026 04:44:06 +0000 (0:00:00.554) 0:09:39.424 *********
2026-03-31 04:44:10.861791 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.861802 | orchestrator |
2026-03-31 04:44:10.861829 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:44:10.861847 | orchestrator | Tuesday 31 March 2026 04:44:06 +0000 (0:00:00.140) 0:09:39.565 *********
2026-03-31 04:44:10.861866 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.861884 | orchestrator |
2026-03-31 04:44:10.861903 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:44:10.861922 | orchestrator | Tuesday 31 March 2026 04:44:07 +0000 (0:00:00.410) 0:09:39.975 *********
2026-03-31 04:44:10.861942 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.861954 | orchestrator |
2026-03-31 04:44:10.861965 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:44:10.861976 | orchestrator | Tuesday 31 March 2026 04:44:07 +0000 (0:00:00.146) 0:09:40.122 *********
2026-03-31 04:44:10.861987 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:10.861998 | orchestrator |
2026-03-31 04:44:10.862008 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:44:10.862108 | orchestrator | Tuesday 31 March 2026 04:44:07 +0000 (0:00:00.558) 0:09:40.680 *********
2026-03-31 04:44:10.862120 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862131 | orchestrator |
2026-03-31 04:44:10.862142 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:44:10.862153 | orchestrator | Tuesday 31 March 2026 04:44:08 +0000 (0:00:00.165) 0:09:40.845 *********
2026-03-31 04:44:10.862164 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862175 | orchestrator |
2026-03-31 04:44:10.862185 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:44:10.862196 | orchestrator | Tuesday 31 March 2026 04:44:08 +0000 (0:00:00.135) 0:09:40.981 *********
2026-03-31 04:44:10.862207 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:10.862218 | orchestrator |
2026-03-31 04:44:10.862228 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:44:10.862239 | orchestrator | Tuesday 31 March 2026 04:44:08 +0000 (0:00:00.532) 0:09:41.513 *********
2026-03-31 04:44:10.862250 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:10.862261 | orchestrator |
2026-03-31 04:44:10.862271 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:44:10.862282 | orchestrator | Tuesday 31 March 2026 04:44:09 +0000 (0:00:00.547) 0:09:42.061 *********
2026-03-31 04:44:10.862293 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862304 | orchestrator |
2026-03-31 04:44:10.862314 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:44:10.862325 | orchestrator | Tuesday 31 March 2026 04:44:09 +0000 (0:00:00.133) 0:09:42.194 *********
2026-03-31 04:44:10.862336 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:10.862347 | orchestrator |
2026-03-31 04:44:10.862358 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:44:10.862386 | orchestrator | Tuesday 31 March 2026 04:44:09 +0000 (0:00:00.159) 0:09:42.354 *********
2026-03-31 04:44:10.862397 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862408 | orchestrator |
2026-03-31 04:44:10.862419 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:44:10.862437 | orchestrator | Tuesday 31 March 2026 04:44:09 +0000 (0:00:00.140) 0:09:42.494 *********
2026-03-31 04:44:10.862448 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862459 | orchestrator |
2026-03-31 04:44:10.862470 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:44:10.862481 | orchestrator | Tuesday 31 March 2026 04:44:09 +0000 (0:00:00.133) 0:09:42.627 *********
2026-03-31 04:44:10.862492 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862503 | orchestrator |
2026-03-31 04:44:10.862514 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:44:10.862525 | orchestrator | Tuesday 31 March 2026 04:44:10 +0000 (0:00:00.200) 0:09:42.827 *********
2026-03-31 04:44:10.862535 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862546 | orchestrator |
2026-03-31 04:44:10.862566 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:44:10.862577 | orchestrator | Tuesday 31 March 2026 04:44:10 +0000 (0:00:00.131) 0:09:42.959 *********
2026-03-31 04:44:10.862588 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:10.862599 | orchestrator |
2026-03-31 04:44:10.862610 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:44:10.862621 | orchestrator | Tuesday 31 March 2026 04:44:10 +0000 (0:00:00.129) 0:09:43.089 *********
2026-03-31 04:44:10.862642 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.607726 | orchestrator |
2026-03-31 04:44:22.607855 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:44:22.607873 | orchestrator | Tuesday 31 March 2026 04:44:10 +0000 (0:00:00.441) 0:09:43.530 *********
2026-03-31 04:44:22.607885 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.607911 | orchestrator |
2026-03-31 04:44:22.607923 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:44:22.607934 | orchestrator | Tuesday 31 March 2026 04:44:10 +0000 (0:00:00.151) 0:09:43.682 *********
2026-03-31 04:44:22.607945 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.607957 | orchestrator |
2026-03-31 04:44:22.607968 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:44:22.607979 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.215) 0:09:43.898 *********
2026-03-31 04:44:22.607990 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608002 | orchestrator |
2026-03-31 04:44:22.608013 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:44:22.608024 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.153) 0:09:44.051 *********
2026-03-31 04:44:22.608035 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608046 | orchestrator |
2026-03-31 04:44:22.608057 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:44:22.608068 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.177) 0:09:44.229 *********
2026-03-31 04:44:22.608080 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608090 | orchestrator |
2026-03-31 04:44:22.608102 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:44:22.608113 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.143) 0:09:44.372 *********
2026-03-31 04:44:22.608124 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608135 | orchestrator |
2026-03-31 04:44:22.608146 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:44:22.608160 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.128) 0:09:44.501 *********
2026-03-31 04:44:22.608173 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608186 | orchestrator |
2026-03-31 04:44:22.608199 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:44:22.608212 | orchestrator | Tuesday 31 March 2026 04:44:11 +0000 (0:00:00.117) 0:09:44.618 *********
2026-03-31 04:44:22.608225 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608237 | orchestrator |
2026-03-31 04:44:22.608250 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:44:22.608262 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.137) 0:09:44.755 *********
2026-03-31 04:44:22.608274 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608287 | orchestrator |
2026-03-31 04:44:22.608300 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:44:22.608313 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.121) 0:09:44.877 *********
2026-03-31 04:44:22.608325 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608338 | orchestrator |
2026-03-31 04:44:22.608350 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:44:22.608363 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.122) 0:09:45.000 *********
2026-03-31 04:44:22.608375 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608437 | orchestrator |
2026-03-31 04:44:22.608452 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:44:22.608465 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.123) 0:09:45.124 *********
2026-03-31 04:44:22.608477 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608489 | orchestrator |
2026-03-31 04:44:22.608503 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:44:22.608517 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.417) 0:09:45.541 *********
2026-03-31 04:44:22.608529 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608540 | orchestrator |
2026-03-31 04:44:22.608551 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:44:22.608562 | orchestrator | Tuesday 31 March 2026 04:44:12 +0000 (0:00:00.135) 0:09:45.676 *********
2026-03-31 04:44:22.608573 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608584 | orchestrator |
2026-03-31 04:44:22.608595 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:44:22.608606 | orchestrator | Tuesday 31 March 2026 04:44:13 +0000 (0:00:00.191) 0:09:45.868 *********
2026-03-31 04:44:22.608617 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.608627 | orchestrator |
2026-03-31 04:44:22.608639 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:44:22.608650 | orchestrator | Tuesday 31 March 2026 04:44:14 +0000 (0:00:00.951) 0:09:46.820 *********
2026-03-31 04:44:22.608661 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.608672 | orchestrator |
2026-03-31 04:44:22.608697 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:44:22.608708 | orchestrator | Tuesday 31 March 2026 04:44:15 +0000 (0:00:01.454) 0:09:48.275 *********
2026-03-31 04:44:22.608719 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-31 04:44:22.608731 | orchestrator |
2026-03-31 04:44:22.608742 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 04:44:22.608753 | orchestrator | Tuesday 31 March 2026 04:44:15 +0000 (0:00:00.208) 0:09:48.484 *********
2026-03-31 04:44:22.608764 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608775 | orchestrator |
2026-03-31 04:44:22.608786 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 04:44:22.608797 | orchestrator | Tuesday 31 March 2026 04:44:15 +0000 (0:00:00.140) 0:09:48.624 *********
2026-03-31 04:44:22.608808 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.608819 | orchestrator |
2026-03-31 04:44:22.608830 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 04:44:22.608846 | orchestrator | Tuesday 31 March 2026 04:44:16 +0000 (0:00:00.137) 0:09:48.761 *********
2026-03-31 04:44:22.608888 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 04:44:22.608908 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 04:44:22.608925 | orchestrator |
2026-03-31 04:44:22.608944 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 04:44:22.608960 | orchestrator | Tuesday 31 March 2026 04:44:16 +0000 (0:00:00.809) 0:09:49.570 *********
2026-03-31 04:44:22.608977 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.608997 | orchestrator |
2026-03-31 04:44:22.609015 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 04:44:22.609035 | orchestrator | Tuesday 31 March 2026 04:44:17 +0000 (0:00:00.489) 0:09:50.060 *********
2026-03-31 04:44:22.609054 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609072 | orchestrator |
2026-03-31 04:44:22.609092 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 04:44:22.609111 | orchestrator | Tuesday 31 March 2026 04:44:17 +0000 (0:00:00.157) 0:09:50.217 *********
2026-03-31 04:44:22.609129 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609148 | orchestrator |
2026-03-31 04:44:22.609167 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:44:22.609202 | orchestrator | Tuesday 31 March 2026 04:44:17 +0000 (0:00:00.409) 0:09:50.626 *********
2026-03-31 04:44:22.609221 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609240 | orchestrator |
2026-03-31 04:44:22.609259 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:44:22.609278 | orchestrator | Tuesday 31 March 2026 04:44:18 +0000 (0:00:00.133) 0:09:50.759 *********
2026-03-31 04:44:22.609297 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-31 04:44:22.609315 | orchestrator |
2026-03-31 04:44:22.609335 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 04:44:22.609354 | orchestrator | Tuesday 31 March 2026 04:44:18 +0000 (0:00:00.208) 0:09:50.968 *********
2026-03-31 04:44:22.609373 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.609421 | orchestrator |
2026-03-31 04:44:22.609443 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 04:44:22.609462 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.764) 0:09:51.732 *********
2026-03-31 04:44:22.609481 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 04:44:22.609500 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 04:44:22.609518 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 04:44:22.609537 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609556 | orchestrator |
2026-03-31 04:44:22.609574 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 04:44:22.609593 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.149) 0:09:51.882 *********
2026-03-31 04:44:22.609612 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609631 | orchestrator |
2026-03-31 04:44:22.609650 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 04:44:22.609669 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.130) 0:09:52.013 *********
2026-03-31 04:44:22.609687 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609705 | orchestrator |
2026-03-31 04:44:22.609724 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 04:44:22.609742 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.174) 0:09:52.188 *********
2026-03-31 04:44:22.609759 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609777 | orchestrator |
2026-03-31 04:44:22.609795 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 04:44:22.609814 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.144) 0:09:52.332 *********
2026-03-31 04:44:22.609832 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609849 | orchestrator |
2026-03-31 04:44:22.609866 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 04:44:22.609886 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.136) 0:09:52.469 *********
2026-03-31 04:44:22.609904 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.609923 | orchestrator |
2026-03-31 04:44:22.609941 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:44:22.609959 | orchestrator | Tuesday 31 March 2026 04:44:19 +0000 (0:00:00.165) 0:09:52.634 *********
2026-03-31 04:44:22.609977 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.609997 | orchestrator |
2026-03-31 04:44:22.610139 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:44:22.610157 | orchestrator | Tuesday 31 March 2026 04:44:21 +0000 (0:00:01.541) 0:09:54.176 *********
2026-03-31 04:44:22.610169 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:22.610180 | orchestrator |
2026-03-31 04:44:22.610200 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:44:22.610212 | orchestrator | Tuesday 31 March 2026 04:44:21 +0000 (0:00:00.132) 0:09:54.309 *********
2026-03-31 04:44:22.610223 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-31 04:44:22.610244 | orchestrator |
2026-03-31 04:44:22.610255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 04:44:22.610266 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.523) 0:09:54.832 *********
2026-03-31 04:44:22.610277 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.610288 | orchestrator |
2026-03-31 04:44:22.610299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 04:44:22.610310 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.152) 0:09:54.985 *********
2026-03-31 04:44:22.610320 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.610331 | orchestrator |
2026-03-31 04:44:22.610342 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 04:44:22.610353 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.147) 0:09:55.133 *********
2026-03-31 04:44:22.610365 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:22.610376 | orchestrator |
2026-03-31 04:44:22.610473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 04:44:34.819951 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.142) 0:09:55.275 *********
2026-03-31 04:44:34.820073 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820091 | orchestrator |
2026-03-31 04:44:34.820104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 04:44:34.820116 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.157) 0:09:55.432 *********
2026-03-31 04:44:34.820127 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820141 | orchestrator |
2026-03-31 04:44:34.820159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 04:44:34.820177 | orchestrator | Tuesday 31 March 2026 04:44:22 +0000 (0:00:00.164) 0:09:55.597 *********
2026-03-31 04:44:34.820195 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820214 | orchestrator |
2026-03-31 04:44:34.820228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 04:44:34.820240 | orchestrator | Tuesday 31 March 2026 04:44:23 +0000 (0:00:00.144) 0:09:55.741 *********
2026-03-31 04:44:34.820251 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820262 | orchestrator |
2026-03-31 04:44:34.820273 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 04:44:34.820285 | orchestrator | Tuesday 31 March 2026 04:44:23 +0000 (0:00:00.147) 0:09:55.889 *********
2026-03-31 04:44:34.820296 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820307 | orchestrator |
2026-03-31 04:44:34.820318 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 04:44:34.820329 | orchestrator | Tuesday 31 March 2026 04:44:23 +0000 (0:00:00.142) 0:09:56.032 *********
2026-03-31 04:44:34.820340 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:44:34.820352 | orchestrator |
2026-03-31 04:44:34.820363 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:44:34.820375 | orchestrator | Tuesday 31 March 2026 04:44:23 +0000 (0:00:00.215) 0:09:56.247 *********
2026-03-31 04:44:34.820386 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-31 04:44:34.820398 | orchestrator |
2026-03-31 04:44:34.820409 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 04:44:34.820420 | orchestrator | Tuesday 31 March 2026 04:44:23 +0000 (0:00:00.192) 0:09:56.439 *********
2026-03-31 04:44:34.820431 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-31 04:44:34.820443 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-31 04:44:34.820454 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-31 04:44:34.820465 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-31 04:44:34.820476 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-31 04:44:34.820487 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-31 04:44:34.820498 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-31 04:44:34.820580 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-31 04:44:34.820594 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 04:44:34.820605 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 04:44:34.820617 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 04:44:34.820628 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 04:44:34.820639 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 04:44:34.820650 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 04:44:34.820660 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-31 04:44:34.820671 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-31 04:44:34.820682 | orchestrator |
2026-03-31 04:44:34.820693 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:44:34.820704 | orchestrator | Tuesday 31 March 2026 04:44:29 +0000 (0:00:05.786) 0:10:02.225 *********
2026-03-31 04:44:34.820715 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820725 | orchestrator |
2026-03-31 04:44:34.820737 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:44:34.820747 | orchestrator | Tuesday 31 March 2026 04:44:29 +0000 (0:00:00.121) 0:10:02.346 *********
2026-03-31 04:44:34.820758 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820769 | orchestrator |
2026-03-31 04:44:34.820780 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:44:34.820790 | orchestrator | Tuesday 31 March 2026 04:44:29 +0000 (0:00:00.138) 0:10:02.485 *********
2026-03-31 04:44:34.820801 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820812 | orchestrator |
2026-03-31 04:44:34.820839 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:44:34.820850 | orchestrator | Tuesday 31 March 2026 04:44:29 +0000 (0:00:00.162) 0:10:02.647 *********
2026-03-31 04:44:34.820861 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820872 | orchestrator |
2026-03-31 04:44:34.820883 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:44:34.820894 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.159) 0:10:02.806 *********
2026-03-31 04:44:34.820905 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820916 | orchestrator |
2026-03-31 04:44:34.820928 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:44:34.820939 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.113) 0:10:02.920 *********
2026-03-31 04:44:34.820950 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.820961 | orchestrator |
2026-03-31 04:44:34.820972 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:44:34.820984 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.136) 0:10:03.057 *********
2026-03-31 04:44:34.820995 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821006 | orchestrator |
2026-03-31 04:44:34.821037 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:44:34.821050 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.164) 0:10:03.221 *********
2026-03-31 04:44:34.821061 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821072 | orchestrator |
2026-03-31 04:44:34.821083 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:44:34.821094 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.129) 0:10:03.350 *********
2026-03-31 04:44:34.821105 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821116 | orchestrator |
2026-03-31 04:44:34.821127 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:44:34.821138 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.151) 0:10:03.502 *********
2026-03-31 04:44:34.821158 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821169 | orchestrator |
2026-03-31 04:44:34.821180 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:44:34.821192 | orchestrator | Tuesday 31 March 2026 04:44:30 +0000 (0:00:00.143) 0:10:03.646 *********
2026-03-31 04:44:34.821203 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821214 | orchestrator |
2026-03-31 04:44:34.821225 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:44:34.821236 | orchestrator | Tuesday 31 March 2026 04:44:31 +0000 (0:00:00.143) 0:10:03.789 *********
2026-03-31 04:44:34.821247 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821258 | orchestrator |
2026-03-31 04:44:34.821269 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:44:34.821280 | orchestrator | Tuesday 31 March 2026 04:44:31 +0000 (0:00:00.133) 0:10:03.922 *********
2026-03-31 04:44:34.821291 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821302 | orchestrator |
2026-03-31 04:44:34.821313 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:44:34.821324 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.844) 0:10:04.767 *********
2026-03-31 04:44:34.821335 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821346 | orchestrator |
2026-03-31 04:44:34.821357 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:44:34.821368 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.136) 0:10:04.903 *********
2026-03-31 04:44:34.821379 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821390 | orchestrator |
2026-03-31 04:44:34.821401 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:44:34.821412 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.228) 0:10:05.132 *********
2026-03-31 04:44:34.821423 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821434 | orchestrator |
2026-03-31 04:44:34.821445 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:44:34.821457 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.140) 0:10:05.273 *********
2026-03-31 04:44:34.821467 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821479 | orchestrator |
2026-03-31 04:44:34.821490 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:44:34.821503 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.124) 0:10:05.397 *********
2026-03-31 04:44:34.821513 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821524 | orchestrator |
2026-03-31 04:44:34.821559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:44:34.821571 | orchestrator | Tuesday 31 March 2026 04:44:32 +0000 (0:00:00.141) 0:10:05.539 *********
2026-03-31 04:44:34.821582 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821593 | orchestrator |
2026-03-31 04:44:34.821604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:44:34.821615 | orchestrator | Tuesday 31 March 2026 04:44:33 +0000 (0:00:00.145) 0:10:05.685 *********
2026-03-31 04:44:34.821626 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821637 | orchestrator |
2026-03-31 04:44:34.821648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:44:34.821659 | orchestrator | Tuesday 31 March 2026 04:44:33 +0000 (0:00:00.147) 0:10:05.832 *********
2026-03-31 04:44:34.821670 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821681 | orchestrator |
2026-03-31 04:44:34.821692 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:44:34.821703 | orchestrator | Tuesday 31 March 2026 04:44:33 +0000 (0:00:00.128) 0:10:05.960 *********
2026-03-31 04:44:34.821714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:44:34.821725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:44:34.821748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:44:34.821759 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821770 | orchestrator |
2026-03-31 04:44:34.821781 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:44:34.821792 | orchestrator | Tuesday 31 March 2026 04:44:33 +0000 (0:00:00.392) 0:10:06.353 *********
2026-03-31 04:44:34.821803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:44:34.821814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:44:34.821825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:44:34.821836 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821847 | orchestrator |
2026-03-31 04:44:34.821858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:44:34.821869 | orchestrator | Tuesday 31 March 2026 04:44:34 +0000 (0:00:00.401) 0:10:06.754 *********
2026-03-31 04:44:34.821880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-31 04:44:34.821891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-31 04:44:34.821902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-31 04:44:34.821913 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:44:34.821924 | orchestrator |
2026-03-31 04:44:34.821942 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:45:03.238137 | orchestrator | Tuesday 31 March 2026 04:44:34 +0000 (0:00:00.729) 0:10:07.484 *********
2026-03-31 04:45:03.238261 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:45:03.238280 | orchestrator |
2026-03-31 04:45:03.238297 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:45:03.238313 | orchestrator | Tuesday 31 March 2026 04:44:34 +0000 (0:00:00.143) 0:10:07.627 *********
2026-03-31 04:45:03.238329 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-31 04:45:03.238343 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:45:03.238357 | orchestrator |
2026-03-31 04:45:03.238372 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-31 04:45:03.238386 | orchestrator | Tuesday 31 March 2026 04:44:35 +0000 (0:00:01.017) 0:10:08.645 *********
2026-03-31 04:45:03.238400 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:45:03.238415 | orchestrator |
2026-03-31 04:45:03.238429 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-31 04:45:03.238443 | orchestrator | Tuesday 31 March 2026 04:44:36 +0000 (0:00:00.824) 0:10:09.469 *********
2026-03-31 04:45:03.238458 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-31 04:45:03.238472 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:45:03.238486 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:45:03.238501 | orchestrator |
2026-03-31 04:45:03.238515 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-31 04:45:03.238529 | orchestrator | Tuesday 31 March 2026 04:44:37 +0000 (0:00:00.693) 0:10:10.163 *********
2026-03-31 04:45:03.238543 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-03-31 04:45:03.238557 | orchestrator |
2026-03-31 04:45:03.238571 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-31 04:45:03.238585 | orchestrator | Tuesday 31 March 2026 04:44:37 +0000 (0:00:00.210) 0:10:10.373 *********
2026-03-31 04:45:03.238599 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:45:03.238614 | orchestrator |
2026-03-31 04:45:03.238629 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-31 04:45:03.238645 | orchestrator | Tuesday 31 March 2026 04:44:38 +0000 (0:00:00.496) 0:10:10.870 *********
2026-03-31 04:45:03.238660 | orchestrator | skipping: [testbed-node-0]
2026-03-31 04:45:03.238676 | orchestrator |
2026-03-31 04:45:03.238692 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-31 04:45:03.238707 | orchestrator | Tuesday 31 March 2026 04:44:38 +0000 (0:00:00.123) 0:10:10.993 *********
2026-03-31 04:45:03.238754 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.238770 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.238838 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.238855 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-31 04:45:03.238870 | orchestrator |
2026-03-31 04:45:03.238886 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-31 04:45:03.238901 | orchestrator | Tuesday 31 March 2026 04:44:44 +0000 (0:00:06.187) 0:10:17.180 *********
2026-03-31 04:45:03.238916 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:45:03.238931 | orchestrator |
2026-03-31 04:45:03.238946 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-31 04:45:03.238962 | orchestrator | Tuesday 31 March 2026 04:44:44 +0000 (0:00:00.189) 0:10:17.370 *********
2026-03-31 04:45:03.238976 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.238990 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-31 04:45:03.239004 | orchestrator |
2026-03-31 04:45:03.239018 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-31 04:45:03.239031 | orchestrator | Tuesday 31 March 2026 04:44:46 +0000 (0:00:02.252) 0:10:19.623 *********
2026-03-31 04:45:03.239045 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.239059 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-31 04:45:03.239073 | orchestrator |
2026-03-31 04:45:03.239087 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-31 04:45:03.239101 | orchestrator | Tuesday 31 March 2026 04:44:47 +0000 (0:00:00.963) 0:10:20.587 *********
2026-03-31 04:45:03.239115 | orchestrator | ok: [testbed-node-0]
2026-03-31 04:45:03.239129 | orchestrator |
2026-03-31 04:45:03.239144 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-31 04:45:03.239158 | orchestrator | Tuesday 31 March 2026 04:44:48 +0000 (0:00:00.767)
0:10:21.354 ********* 2026-03-31 04:45:03.239171 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:45:03.239185 | orchestrator | 2026-03-31 04:45:03.239215 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-31 04:45:03.239230 | orchestrator | Tuesday 31 March 2026 04:44:48 +0000 (0:00:00.139) 0:10:21.493 ********* 2026-03-31 04:45:03.239244 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:45:03.239258 | orchestrator | 2026-03-31 04:45:03.239272 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-31 04:45:03.239286 | orchestrator | Tuesday 31 March 2026 04:44:48 +0000 (0:00:00.159) 0:10:21.653 ********* 2026-03-31 04:45:03.239300 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-03-31 04:45:03.239313 | orchestrator | 2026-03-31 04:45:03.239327 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-31 04:45:03.239341 | orchestrator | Tuesday 31 March 2026 04:44:49 +0000 (0:00:00.218) 0:10:21.872 ********* 2026-03-31 04:45:03.239355 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:45:03.239369 | orchestrator | 2026-03-31 04:45:03.239383 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-31 04:45:03.239397 | orchestrator | Tuesday 31 March 2026 04:44:49 +0000 (0:00:00.152) 0:10:22.025 ********* 2026-03-31 04:45:03.239411 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:45:03.239425 | orchestrator | 2026-03-31 04:45:03.239439 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-31 04:45:03.239473 | orchestrator | Tuesday 31 March 2026 04:44:49 +0000 (0:00:00.147) 0:10:22.172 ********* 2026-03-31 04:45:03.239488 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-03-31 04:45:03.239502 | 
orchestrator | 2026-03-31 04:45:03.239516 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-31 04:45:03.239530 | orchestrator | Tuesday 31 March 2026 04:44:49 +0000 (0:00:00.225) 0:10:22.398 ********* 2026-03-31 04:45:03.239554 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:45:03.239568 | orchestrator | 2026-03-31 04:45:03.239582 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-31 04:45:03.239596 | orchestrator | Tuesday 31 March 2026 04:44:50 +0000 (0:00:01.013) 0:10:23.411 ********* 2026-03-31 04:45:03.239610 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:45:03.239623 | orchestrator | 2026-03-31 04:45:03.239637 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-31 04:45:03.239651 | orchestrator | Tuesday 31 March 2026 04:44:51 +0000 (0:00:00.947) 0:10:24.359 ********* 2026-03-31 04:45:03.239665 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:45:03.239679 | orchestrator | 2026-03-31 04:45:03.239693 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-31 04:45:03.239707 | orchestrator | Tuesday 31 March 2026 04:44:53 +0000 (0:00:01.374) 0:10:25.734 ********* 2026-03-31 04:45:03.239721 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:45:03.239735 | orchestrator | 2026-03-31 04:45:03.239749 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-31 04:45:03.239763 | orchestrator | Tuesday 31 March 2026 04:44:55 +0000 (0:00:02.748) 0:10:28.482 ********* 2026-03-31 04:45:03.239799 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:45:03.239813 | orchestrator | 2026-03-31 04:45:03.239828 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-31 04:45:03.239843 | orchestrator | 2026-03-31 04:45:03.239858 | orchestrator | TASK 
[Stop ceph mgr] *********************************************************** 2026-03-31 04:45:03.239873 | orchestrator | Tuesday 31 March 2026 04:44:56 +0000 (0:00:00.522) 0:10:29.005 ********* 2026-03-31 04:45:03.239887 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:45:03.239902 | orchestrator | 2026-03-31 04:45:03.239917 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-31 04:45:03.239932 | orchestrator | Tuesday 31 March 2026 04:44:58 +0000 (0:00:01.948) 0:10:30.954 ********* 2026-03-31 04:45:03.239947 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:45:03.239962 | orchestrator | 2026-03-31 04:45:03.239976 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:45:03.239991 | orchestrator | Tuesday 31 March 2026 04:44:59 +0000 (0:00:01.494) 0:10:32.448 ********* 2026-03-31 04:45:03.240006 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-31 04:45:03.240020 | orchestrator | 2026-03-31 04:45:03.240035 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:45:03.240050 | orchestrator | Tuesday 31 March 2026 04:45:00 +0000 (0:00:00.292) 0:10:32.741 ********* 2026-03-31 04:45:03.240065 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240079 | orchestrator | 2026-03-31 04:45:03.240094 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:45:03.240109 | orchestrator | Tuesday 31 March 2026 04:45:00 +0000 (0:00:00.460) 0:10:33.201 ********* 2026-03-31 04:45:03.240124 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240138 | orchestrator | 2026-03-31 04:45:03.240153 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:45:03.240167 | orchestrator | Tuesday 31 March 2026 04:45:00 +0000 (0:00:00.154) 0:10:33.355 
********* 2026-03-31 04:45:03.240182 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240197 | orchestrator | 2026-03-31 04:45:03.240212 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:45:03.240227 | orchestrator | Tuesday 31 March 2026 04:45:01 +0000 (0:00:00.459) 0:10:33.815 ********* 2026-03-31 04:45:03.240241 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240256 | orchestrator | 2026-03-31 04:45:03.240270 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:45:03.240285 | orchestrator | Tuesday 31 March 2026 04:45:01 +0000 (0:00:00.158) 0:10:33.973 ********* 2026-03-31 04:45:03.240300 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240315 | orchestrator | 2026-03-31 04:45:03.240329 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:45:03.240352 | orchestrator | Tuesday 31 March 2026 04:45:01 +0000 (0:00:00.182) 0:10:34.155 ********* 2026-03-31 04:45:03.240367 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240380 | orchestrator | 2026-03-31 04:45:03.240392 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:45:03.240405 | orchestrator | Tuesday 31 March 2026 04:45:01 +0000 (0:00:00.166) 0:10:34.322 ********* 2026-03-31 04:45:03.240422 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:03.240435 | orchestrator | 2026-03-31 04:45:03.240447 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:45:03.240460 | orchestrator | Tuesday 31 March 2026 04:45:02 +0000 (0:00:00.466) 0:10:34.788 ********* 2026-03-31 04:45:03.240472 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240485 | orchestrator | 2026-03-31 04:45:03.240497 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2026-03-31 04:45:03.240509 | orchestrator | Tuesday 31 March 2026 04:45:02 +0000 (0:00:00.136) 0:10:34.925 ********* 2026-03-31 04:45:03.240520 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:45:03.240531 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:03.240541 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:45:03.240552 | orchestrator | 2026-03-31 04:45:03.240563 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:45:03.240575 | orchestrator | Tuesday 31 March 2026 04:45:02 +0000 (0:00:00.717) 0:10:35.643 ********* 2026-03-31 04:45:03.240588 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:03.240600 | orchestrator | 2026-03-31 04:45:03.240613 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:45:03.240633 | orchestrator | Tuesday 31 March 2026 04:45:03 +0000 (0:00:00.262) 0:10:35.905 ********* 2026-03-31 04:45:10.141975 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:45:10.142150 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:10.142165 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:45:10.142175 | orchestrator | 2026-03-31 04:45:10.142185 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:45:10.142194 | orchestrator | Tuesday 31 March 2026 04:45:05 +0000 (0:00:01.867) 0:10:37.773 ********* 2026-03-31 04:45:10.142203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 04:45:10.142212 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 04:45:10.142220 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-03-31 04:45:10.142228 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142237 | orchestrator | 2026-03-31 04:45:10.142245 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:45:10.142253 | orchestrator | Tuesday 31 March 2026 04:45:05 +0000 (0:00:00.421) 0:10:38.194 ********* 2026-03-31 04:45:10.142263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142283 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142292 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142300 | orchestrator | 2026-03-31 04:45:10.142327 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:45:10.142336 | orchestrator | Tuesday 31 March 2026 04:45:06 +0000 (0:00:00.660) 0:10:38.855 ********* 2026-03-31 04:45:10.142346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142357 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.142375 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142383 | orchestrator | 2026-03-31 04:45:10.142403 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:45:10.142412 | orchestrator | Tuesday 31 March 2026 04:45:06 +0000 (0:00:00.170) 0:10:39.025 ********* 2026-03-31 04:45:10.142422 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:45:03.751017', 'end': '2026-03-31 04:45:03.800795', 'delta': '0:00:00.049778', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:45:10.142452 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:45:04.336368', 'end': '2026-03-31 04:45:04.384265', 'delta': '0:00:00.047897', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:45:10.142461 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:45:04.893987', 'end': '2026-03-31 04:45:04.942736', 'delta': '0:00:00.048749', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:45:10.142476 | orchestrator | 2026-03-31 04:45:10.142484 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:45:10.142493 | orchestrator | Tuesday 31 March 2026 04:45:06 +0000 (0:00:00.208) 0:10:39.234 ********* 2026-03-31 04:45:10.142501 | 
orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:10.142515 | orchestrator | 2026-03-31 04:45:10.142529 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:45:10.142543 | orchestrator | Tuesday 31 March 2026 04:45:06 +0000 (0:00:00.274) 0:10:39.508 ********* 2026-03-31 04:45:10.142557 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142571 | orchestrator | 2026-03-31 04:45:10.142585 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:45:10.142598 | orchestrator | Tuesday 31 March 2026 04:45:07 +0000 (0:00:00.238) 0:10:39.747 ********* 2026-03-31 04:45:10.142611 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:10.142625 | orchestrator | 2026-03-31 04:45:10.142640 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:45:10.142654 | orchestrator | Tuesday 31 March 2026 04:45:07 +0000 (0:00:00.138) 0:10:39.885 ********* 2026-03-31 04:45:10.142669 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:45:10.142684 | orchestrator | 2026-03-31 04:45:10.142699 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:45:10.142715 | orchestrator | Tuesday 31 March 2026 04:45:08 +0000 (0:00:01.271) 0:10:41.156 ********* 2026-03-31 04:45:10.142725 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:10.142735 | orchestrator | 2026-03-31 04:45:10.142744 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:45:10.142754 | orchestrator | Tuesday 31 March 2026 04:45:08 +0000 (0:00:00.145) 0:10:41.301 ********* 2026-03-31 04:45:10.142763 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142772 | orchestrator | 2026-03-31 04:45:10.142849 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-03-31 04:45:10.142860 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.408) 0:10:41.710 ********* 2026-03-31 04:45:10.142870 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142879 | orchestrator | 2026-03-31 04:45:10.142887 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:45:10.142895 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.250) 0:10:41.961 ********* 2026-03-31 04:45:10.142903 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142911 | orchestrator | 2026-03-31 04:45:10.142919 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:45:10.142933 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.134) 0:10:42.096 ********* 2026-03-31 04:45:10.142941 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142950 | orchestrator | 2026-03-31 04:45:10.142957 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:45:10.142966 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.145) 0:10:42.241 ********* 2026-03-31 04:45:10.142974 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.142981 | orchestrator | 2026-03-31 04:45:10.142989 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:45:10.142997 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.136) 0:10:42.377 ********* 2026-03-31 04:45:10.143005 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.143013 | orchestrator | 2026-03-31 04:45:10.143021 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:45:10.143035 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.119) 0:10:42.496 ********* 2026-03-31 04:45:10.143049 | orchestrator | skipping: 
[testbed-node-1] 2026-03-31 04:45:10.143062 | orchestrator | 2026-03-31 04:45:10.143076 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:45:10.143097 | orchestrator | Tuesday 31 March 2026 04:45:09 +0000 (0:00:00.146) 0:10:42.643 ********* 2026-03-31 04:45:10.143112 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.143126 | orchestrator | 2026-03-31 04:45:10.143141 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:45:10.143166 | orchestrator | Tuesday 31 March 2026 04:45:10 +0000 (0:00:00.170) 0:10:42.814 ********* 2026-03-31 04:45:10.738634 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.738753 | orchestrator | 2026-03-31 04:45:10.738775 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:45:10.738847 | orchestrator | Tuesday 31 March 2026 04:45:10 +0000 (0:00:00.133) 0:10:42.947 ********* 2026-03-31 04:45:10.738863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.738878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.738891 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.738905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:45:10.738920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.738932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.738943 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.739009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:45:10.739026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.739086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:45:10.739100 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:10.739112 | orchestrator | 2026-03-31 04:45:10.739123 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:45:10.739135 | orchestrator | Tuesday 31 March 2026 04:45:10 +0000 (0:00:00.245) 0:10:43.192 ********* 2026-03-31 04:45:10.739147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.739166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:10.739199 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.405830 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.405940 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.405957 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.405970 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.406138 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47a85f4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_47a85f4c-1e56-4b37-90fc-526aac14af8e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.406189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.406202 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:45:13.406215 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:13.406229 | orchestrator | 2026-03-31 04:45:13.406242 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:45:13.406255 | orchestrator | Tuesday 31 March 2026 04:45:10 +0000 (0:00:00.219) 0:10:43.412 ********* 2026-03-31 04:45:13.406266 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:13.406278 | orchestrator | 2026-03-31 04:45:13.406289 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:45:13.406300 | orchestrator | Tuesday 31 March 2026 04:45:11 +0000 (0:00:00.454) 0:10:43.866 ********* 2026-03-31 04:45:13.406319 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:13.406332 | orchestrator | 2026-03-31 04:45:13.406344 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:45:13.406357 | orchestrator | Tuesday 31 March 2026 04:45:11 +0000 (0:00:00.130) 0:10:43.996 ********* 2026-03-31 04:45:13.406369 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:13.406382 | orchestrator | 2026-03-31 04:45:13.406394 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:45:13.406413 | orchestrator | Tuesday 31 March 2026 04:45:12 +0000 (0:00:00.697) 0:10:44.694 ********* 2026-03-31 04:45:13.406426 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:13.406439 | orchestrator | 2026-03-31 04:45:13.406451 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:45:13.406464 | orchestrator | Tuesday 31 March 2026 04:45:12 
+0000 (0:00:00.123) 0:10:44.817 ********* 2026-03-31 04:45:13.406477 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:13.406488 | orchestrator | 2026-03-31 04:45:13.406500 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:45:13.406510 | orchestrator | Tuesday 31 March 2026 04:45:12 +0000 (0:00:00.246) 0:10:45.064 ********* 2026-03-31 04:45:13.406521 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:13.406533 | orchestrator | 2026-03-31 04:45:13.406544 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:45:13.406555 | orchestrator | Tuesday 31 March 2026 04:45:12 +0000 (0:00:00.156) 0:10:45.220 ********* 2026-03-31 04:45:13.406566 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-31 04:45:13.406577 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:13.406588 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-31 04:45:13.406599 | orchestrator | 2026-03-31 04:45:13.406610 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:45:13.406621 | orchestrator | Tuesday 31 March 2026 04:45:13 +0000 (0:00:00.669) 0:10:45.890 ********* 2026-03-31 04:45:13.406631 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-31 04:45:13.406643 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-31 04:45:13.406654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-31 04:45:13.406665 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:13.406676 | orchestrator | 2026-03-31 04:45:13.406697 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:45:23.264248 | orchestrator | Tuesday 31 March 2026 04:45:13 +0000 (0:00:00.182) 0:10:46.072 ********* 2026-03-31 04:45:23.264368 | 
orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.264385 | orchestrator | 2026-03-31 04:45:23.264399 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:45:23.264411 | orchestrator | Tuesday 31 March 2026 04:45:13 +0000 (0:00:00.142) 0:10:46.215 ********* 2026-03-31 04:45:23.264422 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:45:23.264435 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:23.264446 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:45:23.264457 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:45:23.264468 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:45:23.264479 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:45:23.264490 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:45:23.264501 | orchestrator | 2026-03-31 04:45:23.264512 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:45:23.264523 | orchestrator | Tuesday 31 March 2026 04:45:14 +0000 (0:00:01.163) 0:10:47.378 ********* 2026-03-31 04:45:23.264558 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:45:23.264570 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:23.264581 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:45:23.264592 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:45:23.264603 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-03-31 04:45:23.264614 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:45:23.264625 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:45:23.264636 | orchestrator | 2026-03-31 04:45:23.264647 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 04:45:23.264658 | orchestrator | Tuesday 31 March 2026 04:45:16 +0000 (0:00:01.682) 0:10:49.061 ********* 2026-03-31 04:45:23.264669 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-31 04:45:23.264680 | orchestrator | 2026-03-31 04:45:23.264692 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 04:45:23.264703 | orchestrator | Tuesday 31 March 2026 04:45:16 +0000 (0:00:00.193) 0:10:49.255 ********* 2026-03-31 04:45:23.264714 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-31 04:45:23.264726 | orchestrator | 2026-03-31 04:45:23.264737 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 04:45:23.264747 | orchestrator | Tuesday 31 March 2026 04:45:17 +0000 (0:00:00.456) 0:10:49.711 ********* 2026-03-31 04:45:23.264759 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.264772 | orchestrator | 2026-03-31 04:45:23.264784 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 04:45:23.264821 | orchestrator | Tuesday 31 March 2026 04:45:17 +0000 (0:00:00.471) 0:10:50.183 ********* 2026-03-31 04:45:23.264834 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.264847 | orchestrator | 2026-03-31 04:45:23.264859 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-31 04:45:23.264872 | orchestrator | Tuesday 31 March 2026 04:45:17 +0000 (0:00:00.145) 0:10:50.329 ********* 2026-03-31 04:45:23.264884 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.264896 | orchestrator | 2026-03-31 04:45:23.264925 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 04:45:23.264938 | orchestrator | Tuesday 31 March 2026 04:45:17 +0000 (0:00:00.161) 0:10:50.490 ********* 2026-03-31 04:45:23.264950 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.264963 | orchestrator | 2026-03-31 04:45:23.264975 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 04:45:23.264987 | orchestrator | Tuesday 31 March 2026 04:45:17 +0000 (0:00:00.133) 0:10:50.624 ********* 2026-03-31 04:45:23.264999 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265012 | orchestrator | 2026-03-31 04:45:23.265024 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 04:45:23.265037 | orchestrator | Tuesday 31 March 2026 04:45:18 +0000 (0:00:00.491) 0:10:51.115 ********* 2026-03-31 04:45:23.265050 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265063 | orchestrator | 2026-03-31 04:45:23.265076 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 04:45:23.265088 | orchestrator | Tuesday 31 March 2026 04:45:18 +0000 (0:00:00.136) 0:10:51.252 ********* 2026-03-31 04:45:23.265101 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265114 | orchestrator | 2026-03-31 04:45:23.265126 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 04:45:23.265137 | orchestrator | Tuesday 31 March 2026 04:45:18 +0000 (0:00:00.150) 0:10:51.402 ********* 2026-03-31 04:45:23.265148 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265175 | 
orchestrator | 2026-03-31 04:45:23.265193 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 04:45:23.265211 | orchestrator | Tuesday 31 March 2026 04:45:19 +0000 (0:00:00.521) 0:10:51.923 ********* 2026-03-31 04:45:23.265230 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265247 | orchestrator | 2026-03-31 04:45:23.265259 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 04:45:23.265290 | orchestrator | Tuesday 31 March 2026 04:45:19 +0000 (0:00:00.472) 0:10:52.396 ********* 2026-03-31 04:45:23.265302 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265313 | orchestrator | 2026-03-31 04:45:23.265324 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 04:45:23.265335 | orchestrator | Tuesday 31 March 2026 04:45:19 +0000 (0:00:00.151) 0:10:52.547 ********* 2026-03-31 04:45:23.265345 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265356 | orchestrator | 2026-03-31 04:45:23.265367 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 04:45:23.265378 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 (0:00:00.154) 0:10:52.701 ********* 2026-03-31 04:45:23.265389 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265400 | orchestrator | 2026-03-31 04:45:23.265411 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 04:45:23.265421 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 (0:00:00.122) 0:10:52.824 ********* 2026-03-31 04:45:23.265432 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265443 | orchestrator | 2026-03-31 04:45:23.265454 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 04:45:23.265465 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 
(0:00:00.416) 0:10:53.241 ********* 2026-03-31 04:45:23.265476 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265487 | orchestrator | 2026-03-31 04:45:23.265497 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:45:23.265508 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 (0:00:00.124) 0:10:53.365 ********* 2026-03-31 04:45:23.265519 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265530 | orchestrator | 2026-03-31 04:45:23.265541 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:45:23.265552 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 (0:00:00.136) 0:10:53.502 ********* 2026-03-31 04:45:23.265563 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265573 | orchestrator | 2026-03-31 04:45:23.265584 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:45:23.265595 | orchestrator | Tuesday 31 March 2026 04:45:20 +0000 (0:00:00.137) 0:10:53.640 ********* 2026-03-31 04:45:23.265606 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265617 | orchestrator | 2026-03-31 04:45:23.265628 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:45:23.265639 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.164) 0:10:53.804 ********* 2026-03-31 04:45:23.265650 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265661 | orchestrator | 2026-03-31 04:45:23.265672 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:45:23.265683 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.166) 0:10:53.971 ********* 2026-03-31 04:45:23.265694 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:23.265713 | orchestrator | 2026-03-31 04:45:23.265731 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] ************************** 2026-03-31 04:45:23.265749 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.219) 0:10:54.190 ********* 2026-03-31 04:45:23.265768 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265787 | orchestrator | 2026-03-31 04:45:23.265823 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:45:23.265834 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.124) 0:10:54.315 ********* 2026-03-31 04:45:23.265845 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265865 | orchestrator | 2026-03-31 04:45:23.265876 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:45:23.265887 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.120) 0:10:54.435 ********* 2026-03-31 04:45:23.265898 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265909 | orchestrator | 2026-03-31 04:45:23.265920 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:45:23.265931 | orchestrator | Tuesday 31 March 2026 04:45:21 +0000 (0:00:00.128) 0:10:54.564 ********* 2026-03-31 04:45:23.265942 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.265953 | orchestrator | 2026-03-31 04:45:23.265964 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:45:23.265974 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.137) 0:10:54.701 ********* 2026-03-31 04:45:23.265992 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266003 | orchestrator | 2026-03-31 04:45:23.266071 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:45:23.266086 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.113) 0:10:54.815 ********* 2026-03-31 04:45:23.266097 | orchestrator | 
skipping: [testbed-node-1] 2026-03-31 04:45:23.266108 | orchestrator | 2026-03-31 04:45:23.266119 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:45:23.266131 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.137) 0:10:54.952 ********* 2026-03-31 04:45:23.266142 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266153 | orchestrator | 2026-03-31 04:45:23.266164 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:45:23.266175 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.429) 0:10:55.381 ********* 2026-03-31 04:45:23.266186 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266197 | orchestrator | 2026-03-31 04:45:23.266208 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:45:23.266219 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.154) 0:10:55.535 ********* 2026-03-31 04:45:23.266230 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266241 | orchestrator | 2026-03-31 04:45:23.266252 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:45:23.266263 | orchestrator | Tuesday 31 March 2026 04:45:22 +0000 (0:00:00.138) 0:10:55.674 ********* 2026-03-31 04:45:23.266274 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266285 | orchestrator | 2026-03-31 04:45:23.266296 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:45:23.266307 | orchestrator | Tuesday 31 March 2026 04:45:23 +0000 (0:00:00.137) 0:10:55.812 ********* 2026-03-31 04:45:23.266318 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:23.266329 | orchestrator | 2026-03-31 04:45:23.266350 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 
2026-03-31 04:45:40.154253 | orchestrator | Tuesday 31 March 2026 04:45:23 +0000 (0:00:00.125) 0:10:55.938 ********* 2026-03-31 04:45:40.154373 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.154390 | orchestrator | 2026-03-31 04:45:40.154404 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:45:40.154416 | orchestrator | Tuesday 31 March 2026 04:45:23 +0000 (0:00:00.204) 0:10:56.142 ********* 2026-03-31 04:45:40.154427 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.154440 | orchestrator | 2026-03-31 04:45:40.154451 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:45:40.154462 | orchestrator | Tuesday 31 March 2026 04:45:24 +0000 (0:00:00.947) 0:10:57.090 ********* 2026-03-31 04:45:40.154474 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.154485 | orchestrator | 2026-03-31 04:45:40.154496 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:45:40.154507 | orchestrator | Tuesday 31 March 2026 04:45:25 +0000 (0:00:01.346) 0:10:58.437 ********* 2026-03-31 04:45:40.154518 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-31 04:45:40.154552 | orchestrator | 2026-03-31 04:45:40.154564 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:45:40.154575 | orchestrator | Tuesday 31 March 2026 04:45:25 +0000 (0:00:00.222) 0:10:58.659 ********* 2026-03-31 04:45:40.154586 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.154597 | orchestrator | 2026-03-31 04:45:40.154608 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:45:40.154619 | orchestrator | Tuesday 31 March 2026 04:45:26 +0000 (0:00:00.126) 0:10:58.786 ********* 2026-03-31 04:45:40.154630 | orchestrator | 
skipping: [testbed-node-1] 2026-03-31 04:45:40.154641 | orchestrator | 2026-03-31 04:45:40.154652 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:45:40.154663 | orchestrator | Tuesday 31 March 2026 04:45:26 +0000 (0:00:00.138) 0:10:58.924 ********* 2026-03-31 04:45:40.154674 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:45:40.154685 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:45:40.154697 | orchestrator | 2026-03-31 04:45:40.154708 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:45:40.154718 | orchestrator | Tuesday 31 March 2026 04:45:27 +0000 (0:00:01.167) 0:11:00.092 ********* 2026-03-31 04:45:40.154729 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.154740 | orchestrator | 2026-03-31 04:45:40.154751 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:45:40.154762 | orchestrator | Tuesday 31 March 2026 04:45:27 +0000 (0:00:00.502) 0:11:00.595 ********* 2026-03-31 04:45:40.154773 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.154786 | orchestrator | 2026-03-31 04:45:40.154798 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:45:40.154811 | orchestrator | Tuesday 31 March 2026 04:45:28 +0000 (0:00:00.158) 0:11:00.753 ********* 2026-03-31 04:45:40.154848 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.154861 | orchestrator | 2026-03-31 04:45:40.154874 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:45:40.154886 | orchestrator | Tuesday 31 March 2026 04:45:28 +0000 (0:00:00.131) 0:11:00.885 ********* 2026-03-31 04:45:40.154899 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
04:45:40.154917 | orchestrator | 2026-03-31 04:45:40.154936 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:45:40.154957 | orchestrator | Tuesday 31 March 2026 04:45:28 +0000 (0:00:00.125) 0:11:01.011 ********* 2026-03-31 04:45:40.154974 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-31 04:45:40.154991 | orchestrator | 2026-03-31 04:45:40.155010 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:45:40.155029 | orchestrator | Tuesday 31 March 2026 04:45:28 +0000 (0:00:00.227) 0:11:01.238 ********* 2026-03-31 04:45:40.155066 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.155086 | orchestrator | 2026-03-31 04:45:40.155106 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:45:40.155125 | orchestrator | Tuesday 31 March 2026 04:45:29 +0000 (0:00:00.725) 0:11:01.964 ********* 2026-03-31 04:45:40.155144 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:45:40.155162 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:45:40.155181 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:45:40.155193 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155204 | orchestrator | 2026-03-31 04:45:40.155215 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:45:40.155226 | orchestrator | Tuesday 31 March 2026 04:45:29 +0000 (0:00:00.153) 0:11:02.117 ********* 2026-03-31 04:45:40.155237 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155258 | orchestrator | 2026-03-31 04:45:40.155269 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 
2026-03-31 04:45:40.155280 | orchestrator | Tuesday 31 March 2026 04:45:29 +0000 (0:00:00.141) 0:11:02.259 ********* 2026-03-31 04:45:40.155292 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155309 | orchestrator | 2026-03-31 04:45:40.155327 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:45:40.155347 | orchestrator | Tuesday 31 March 2026 04:45:29 +0000 (0:00:00.177) 0:11:02.436 ********* 2026-03-31 04:45:40.155367 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155379 | orchestrator | 2026-03-31 04:45:40.155390 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:45:40.155401 | orchestrator | Tuesday 31 March 2026 04:45:29 +0000 (0:00:00.149) 0:11:02.585 ********* 2026-03-31 04:45:40.155412 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155422 | orchestrator | 2026-03-31 04:45:40.155453 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:45:40.155465 | orchestrator | Tuesday 31 March 2026 04:45:30 +0000 (0:00:00.169) 0:11:02.755 ********* 2026-03-31 04:45:40.155476 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155487 | orchestrator | 2026-03-31 04:45:40.155497 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:45:40.155508 | orchestrator | Tuesday 31 March 2026 04:45:30 +0000 (0:00:00.433) 0:11:03.188 ********* 2026-03-31 04:45:40.155519 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.155530 | orchestrator | 2026-03-31 04:45:40.155541 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:45:40.155551 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:01.647) 0:11:04.835 ********* 2026-03-31 04:45:40.155562 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.155573 | 
orchestrator | 2026-03-31 04:45:40.155584 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:45:40.155595 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:00.156) 0:11:04.992 ********* 2026-03-31 04:45:40.155606 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-31 04:45:40.155617 | orchestrator | 2026-03-31 04:45:40.155627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:45:40.155638 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:00.224) 0:11:05.217 ********* 2026-03-31 04:45:40.155649 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155660 | orchestrator | 2026-03-31 04:45:40.155671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:45:40.155682 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:00.153) 0:11:05.370 ********* 2026-03-31 04:45:40.155693 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155703 | orchestrator | 2026-03-31 04:45:40.155714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:45:40.155725 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:00.151) 0:11:05.522 ********* 2026-03-31 04:45:40.155736 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155747 | orchestrator | 2026-03-31 04:45:40.155760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:45:40.155779 | orchestrator | Tuesday 31 March 2026 04:45:32 +0000 (0:00:00.158) 0:11:05.681 ********* 2026-03-31 04:45:40.155795 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155813 | orchestrator | 2026-03-31 04:45:40.155863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-31 
04:45:40.155883 | orchestrator | Tuesday 31 March 2026 04:45:33 +0000 (0:00:00.152) 0:11:05.834 ********* 2026-03-31 04:45:40.155895 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155906 | orchestrator | 2026-03-31 04:45:40.155917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:45:40.155928 | orchestrator | Tuesday 31 March 2026 04:45:33 +0000 (0:00:00.137) 0:11:05.971 ********* 2026-03-31 04:45:40.155949 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.155963 | orchestrator | 2026-03-31 04:45:40.155983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:45:40.156002 | orchestrator | Tuesday 31 March 2026 04:45:33 +0000 (0:00:00.151) 0:11:06.123 ********* 2026-03-31 04:45:40.156021 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.156034 | orchestrator | 2026-03-31 04:45:40.156046 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:45:40.156056 | orchestrator | Tuesday 31 March 2026 04:45:33 +0000 (0:00:00.150) 0:11:06.274 ********* 2026-03-31 04:45:40.156067 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:40.156078 | orchestrator | 2026-03-31 04:45:40.156089 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:45:40.156100 | orchestrator | Tuesday 31 March 2026 04:45:33 +0000 (0:00:00.145) 0:11:06.419 ********* 2026-03-31 04:45:40.156111 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:40.156122 | orchestrator | 2026-03-31 04:45:40.156132 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:45:40.156143 | orchestrator | Tuesday 31 March 2026 04:45:34 +0000 (0:00:00.521) 0:11:06.940 ********* 2026-03-31 04:45:40.156161 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-31 04:45:40.156175 | orchestrator | 2026-03-31 04:45:40.156194 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:45:40.156213 | orchestrator | Tuesday 31 March 2026 04:45:34 +0000 (0:00:00.196) 0:11:07.137 ********* 2026-03-31 04:45:40.156230 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-31 04:45:40.156248 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-31 04:45:40.156265 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-31 04:45:40.156283 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-31 04:45:40.156300 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-31 04:45:40.156319 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-31 04:45:40.156338 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-31 04:45:40.156358 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:45:40.156377 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:45:40.156397 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:45:40.156409 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:45:40.156421 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:45:40.156431 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:45:40.156442 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:45:40.156453 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-31 04:45:40.156464 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-31 04:45:40.156475 | orchestrator | 2026-03-31 04:45:40.156501 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:45:57.608609 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:05.677) 0:11:12.815 ********* 2026-03-31 04:45:57.608787 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.608818 | orchestrator | 2026-03-31 04:45:57.608837 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:45:57.608855 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.134) 0:11:12.949 ********* 2026-03-31 04:45:57.608918 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.608936 | orchestrator | 2026-03-31 04:45:57.608953 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:45:57.608970 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.158) 0:11:13.108 ********* 2026-03-31 04:45:57.608988 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609031 | orchestrator | 2026-03-31 04:45:57.609050 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:45:57.609067 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.118) 0:11:13.226 ********* 2026-03-31 04:45:57.609084 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609101 | orchestrator | 2026-03-31 04:45:57.609118 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:45:57.609135 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.112) 0:11:13.338 ********* 2026-03-31 04:45:57.609153 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609170 | orchestrator | 2026-03-31 04:45:57.609186 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:45:57.609203 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.128) 0:11:13.467 ********* 2026-03-31 
04:45:57.609220 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609236 | orchestrator | 2026-03-31 04:45:57.609254 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:45:57.609273 | orchestrator | Tuesday 31 March 2026 04:45:40 +0000 (0:00:00.131) 0:11:13.598 ********* 2026-03-31 04:45:57.609290 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609306 | orchestrator | 2026-03-31 04:45:57.609323 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:45:57.609339 | orchestrator | Tuesday 31 March 2026 04:45:41 +0000 (0:00:00.137) 0:11:13.736 ********* 2026-03-31 04:45:57.609356 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609374 | orchestrator | 2026-03-31 04:45:57.609391 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:45:57.609407 | orchestrator | Tuesday 31 March 2026 04:45:41 +0000 (0:00:00.131) 0:11:13.868 ********* 2026-03-31 04:45:57.609423 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609440 | orchestrator | 2026-03-31 04:45:57.609456 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:45:57.609472 | orchestrator | Tuesday 31 March 2026 04:45:41 +0000 (0:00:00.417) 0:11:14.286 ********* 2026-03-31 04:45:57.609489 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609505 | orchestrator | 2026-03-31 04:45:57.609521 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:45:57.609537 | orchestrator | Tuesday 31 March 2026 04:45:41 +0000 (0:00:00.134) 0:11:14.420 ********* 2026-03-31 04:45:57.609554 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609570 | orchestrator | 2026-03-31 
04:45:57.609587 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:45:57.609605 | orchestrator | Tuesday 31 March 2026 04:45:41 +0000 (0:00:00.144) 0:11:14.565 ********* 2026-03-31 04:45:57.609621 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609637 | orchestrator | 2026-03-31 04:45:57.609654 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:45:57.609670 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.137) 0:11:14.703 ********* 2026-03-31 04:45:57.609687 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609703 | orchestrator | 2026-03-31 04:45:57.609720 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:45:57.609755 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.257) 0:11:14.961 ********* 2026-03-31 04:45:57.609771 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609787 | orchestrator | 2026-03-31 04:45:57.609803 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:45:57.609819 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.158) 0:11:15.120 ********* 2026-03-31 04:45:57.609836 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609853 | orchestrator | 2026-03-31 04:45:57.609889 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:45:57.609902 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.239) 0:11:15.359 ********* 2026-03-31 04:45:57.609925 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609938 | orchestrator | 2026-03-31 04:45:57.609952 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:45:57.609965 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.152) 
0:11:15.512 ********* 2026-03-31 04:45:57.609979 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.609992 | orchestrator | 2026-03-31 04:45:57.610006 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:45:57.610092 | orchestrator | Tuesday 31 March 2026 04:45:42 +0000 (0:00:00.146) 0:11:15.659 ********* 2026-03-31 04:45:57.610105 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610119 | orchestrator | 2026-03-31 04:45:57.610132 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:45:57.610146 | orchestrator | Tuesday 31 March 2026 04:45:43 +0000 (0:00:00.138) 0:11:15.798 ********* 2026-03-31 04:45:57.610160 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610173 | orchestrator | 2026-03-31 04:45:57.610185 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:45:57.610194 | orchestrator | Tuesday 31 March 2026 04:45:43 +0000 (0:00:00.137) 0:11:15.935 ********* 2026-03-31 04:45:57.610202 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610210 | orchestrator | 2026-03-31 04:45:57.610237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:45:57.610245 | orchestrator | Tuesday 31 March 2026 04:45:43 +0000 (0:00:00.134) 0:11:16.069 ********* 2026-03-31 04:45:57.610253 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610262 | orchestrator | 2026-03-31 04:45:57.610270 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:45:57.610278 | orchestrator | Tuesday 31 March 2026 04:45:43 +0000 (0:00:00.132) 0:11:16.201 ********* 2026-03-31 04:45:57.610286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:45:57.610294 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:45:57.610302 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:45:57.610310 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610318 | orchestrator | 2026-03-31 04:45:57.610327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:45:57.610335 | orchestrator | Tuesday 31 March 2026 04:45:44 +0000 (0:00:00.703) 0:11:16.905 ********* 2026-03-31 04:45:57.610343 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:45:57.610351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:45:57.610359 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:45:57.610367 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610375 | orchestrator | 2026-03-31 04:45:57.610383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:45:57.610391 | orchestrator | Tuesday 31 March 2026 04:45:45 +0000 (0:00:01.034) 0:11:17.939 ********* 2026-03-31 04:45:57.610399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-31 04:45:57.610407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-31 04:45:57.610415 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-31 04:45:57.610423 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610431 | orchestrator | 2026-03-31 04:45:57.610439 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:45:57.610447 | orchestrator | Tuesday 31 March 2026 04:45:45 +0000 (0:00:00.444) 0:11:18.383 ********* 2026-03-31 04:45:57.610455 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610463 | orchestrator | 2026-03-31 04:45:57.610472 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-31 04:45:57.610480 | orchestrator | Tuesday 31 March 2026 04:45:45 +0000 (0:00:00.158) 0:11:18.542 ********* 2026-03-31 04:45:57.610498 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-31 04:45:57.610507 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610515 | orchestrator | 2026-03-31 04:45:57.610523 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:45:57.610532 | orchestrator | Tuesday 31 March 2026 04:45:46 +0000 (0:00:00.337) 0:11:18.879 ********* 2026-03-31 04:45:57.610546 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:57.610559 | orchestrator | 2026-03-31 04:45:57.610571 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:45:57.610584 | orchestrator | Tuesday 31 March 2026 04:45:47 +0000 (0:00:00.853) 0:11:19.733 ********* 2026-03-31 04:45:57.610597 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:45:57.610612 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-31 04:45:57.610626 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:45:57.610639 | orchestrator | 2026-03-31 04:45:57.610652 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-31 04:45:57.610665 | orchestrator | Tuesday 31 March 2026 04:45:47 +0000 (0:00:00.647) 0:11:20.380 ********* 2026-03-31 04:45:57.610679 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-31 04:45:57.610692 | orchestrator | 2026-03-31 04:45:57.610713 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-31 04:45:57.610728 | orchestrator | Tuesday 31 March 2026 04:45:47 +0000 (0:00:00.215) 0:11:20.596 ********* 
2026-03-31 04:45:57.610742 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:57.610755 | orchestrator | 2026-03-31 04:45:57.610768 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-31 04:45:57.610782 | orchestrator | Tuesday 31 March 2026 04:45:48 +0000 (0:00:00.495) 0:11:21.091 ********* 2026-03-31 04:45:57.610797 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:45:57.610812 | orchestrator | 2026-03-31 04:45:57.610824 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-31 04:45:57.610837 | orchestrator | Tuesday 31 March 2026 04:45:48 +0000 (0:00:00.167) 0:11:21.259 ********* 2026-03-31 04:45:57.610849 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:45:57.610896 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:45:57.610911 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:45:57.610924 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-31 04:45:57.610937 | orchestrator | 2026-03-31 04:45:57.610951 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-31 04:45:57.610965 | orchestrator | Tuesday 31 March 2026 04:45:54 +0000 (0:00:06.350) 0:11:27.610 ********* 2026-03-31 04:45:57.610978 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:45:57.610991 | orchestrator | 2026-03-31 04:45:57.611005 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-31 04:45:57.611020 | orchestrator | Tuesday 31 March 2026 04:45:55 +0000 (0:00:00.496) 0:11:28.106 ********* 2026-03-31 04:45:57.611033 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-31 04:45:57.611047 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-31 
04:45:57.611060 | orchestrator | 2026-03-31 04:45:57.611086 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:46:16.766184 | orchestrator | Tuesday 31 March 2026 04:45:57 +0000 (0:00:02.167) 0:11:30.273 ********* 2026-03-31 04:46:16.766326 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-31 04:46:16.766343 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-31 04:46:16.766354 | orchestrator | 2026-03-31 04:46:16.766366 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-31 04:46:16.766376 | orchestrator | Tuesday 31 March 2026 04:45:58 +0000 (0:00:01.006) 0:11:31.280 ********* 2026-03-31 04:46:16.766412 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:46:16.766423 | orchestrator | 2026-03-31 04:46:16.766433 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-31 04:46:16.766443 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.506) 0:11:31.786 ********* 2026-03-31 04:46:16.766453 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:46:16.766463 | orchestrator | 2026-03-31 04:46:16.766473 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-31 04:46:16.766483 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.132) 0:11:31.919 ********* 2026-03-31 04:46:16.766493 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:46:16.766503 | orchestrator | 2026-03-31 04:46:16.766512 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-31 04:46:16.766522 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.144) 0:11:32.064 ********* 2026-03-31 04:46:16.766532 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-31 04:46:16.766543 | orchestrator | 2026-03-31 04:46:16.766552 | orchestrator | 
TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-31 04:46:16.766562 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.228) 0:11:32.292 ********* 2026-03-31 04:46:16.766572 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:46:16.766582 | orchestrator | 2026-03-31 04:46:16.766592 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-31 04:46:16.766602 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.144) 0:11:32.437 ********* 2026-03-31 04:46:16.766611 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:46:16.766621 | orchestrator | 2026-03-31 04:46:16.766631 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-31 04:46:16.766641 | orchestrator | Tuesday 31 March 2026 04:45:59 +0000 (0:00:00.157) 0:11:32.595 ********* 2026-03-31 04:46:16.766651 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-31 04:46:16.766661 | orchestrator | 2026-03-31 04:46:16.766670 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-31 04:46:16.766680 | orchestrator | Tuesday 31 March 2026 04:46:00 +0000 (0:00:00.224) 0:11:32.819 ********* 2026-03-31 04:46:16.766690 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:46:16.766700 | orchestrator | 2026-03-31 04:46:16.766710 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-31 04:46:16.766719 | orchestrator | Tuesday 31 March 2026 04:46:01 +0000 (0:00:01.043) 0:11:33.863 ********* 2026-03-31 04:46:16.766729 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:46:16.766739 | orchestrator | 2026-03-31 04:46:16.766749 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-31 04:46:16.766759 | orchestrator | Tuesday 31 March 2026 04:46:02 +0000 (0:00:01.178) 
0:11:35.041 ********* 2026-03-31 04:46:16.766768 | orchestrator | ok: [testbed-node-1] 2026-03-31 04:46:16.766778 | orchestrator | 2026-03-31 04:46:16.766788 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-31 04:46:16.766798 | orchestrator | Tuesday 31 March 2026 04:46:03 +0000 (0:00:01.376) 0:11:36.417 ********* 2026-03-31 04:46:16.766808 | orchestrator | changed: [testbed-node-1] 2026-03-31 04:46:16.766818 | orchestrator | 2026-03-31 04:46:16.766828 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-31 04:46:16.766838 | orchestrator | Tuesday 31 March 2026 04:46:06 +0000 (0:00:02.750) 0:11:39.168 ********* 2026-03-31 04:46:16.766848 | orchestrator | skipping: [testbed-node-1] 2026-03-31 04:46:16.766857 | orchestrator | 2026-03-31 04:46:16.766881 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-31 04:46:16.766891 | orchestrator | 2026-03-31 04:46:16.766921 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-31 04:46:16.766931 | orchestrator | Tuesday 31 March 2026 04:46:06 +0000 (0:00:00.225) 0:11:39.394 ********* 2026-03-31 04:46:16.766940 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:46:16.766958 | orchestrator | 2026-03-31 04:46:16.766968 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-31 04:46:16.766977 | orchestrator | Tuesday 31 March 2026 04:46:08 +0000 (0:00:01.881) 0:11:41.275 ********* 2026-03-31 04:46:16.766987 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:46:16.766997 | orchestrator | 2026-03-31 04:46:16.767007 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:46:16.767016 | orchestrator | Tuesday 31 March 2026 04:46:10 +0000 (0:00:01.555) 0:11:42.830 ********* 2026-03-31 04:46:16.767026 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-31 04:46:16.767035 | orchestrator | 2026-03-31 04:46:16.767045 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:46:16.767055 | orchestrator | Tuesday 31 March 2026 04:46:10 +0000 (0:00:00.247) 0:11:43.077 ********* 2026-03-31 04:46:16.767064 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767074 | orchestrator | 2026-03-31 04:46:16.767084 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:46:16.767093 | orchestrator | Tuesday 31 March 2026 04:46:10 +0000 (0:00:00.509) 0:11:43.587 ********* 2026-03-31 04:46:16.767103 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767112 | orchestrator | 2026-03-31 04:46:16.767122 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:46:16.767132 | orchestrator | Tuesday 31 March 2026 04:46:11 +0000 (0:00:00.114) 0:11:43.701 ********* 2026-03-31 04:46:16.767141 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767151 | orchestrator | 2026-03-31 04:46:16.767161 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:46:16.767187 | orchestrator | Tuesday 31 March 2026 04:46:11 +0000 (0:00:00.473) 0:11:44.175 ********* 2026-03-31 04:46:16.767198 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767208 | orchestrator | 2026-03-31 04:46:16.767218 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:46:16.767227 | orchestrator | Tuesday 31 March 2026 04:46:11 +0000 (0:00:00.409) 0:11:44.584 ********* 2026-03-31 04:46:16.767237 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767247 | orchestrator | 2026-03-31 04:46:16.767257 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2026-03-31 04:46:16.767266 | orchestrator | Tuesday 31 March 2026 04:46:12 +0000 (0:00:00.150) 0:11:44.735 ********* 2026-03-31 04:46:16.767276 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767286 | orchestrator | 2026-03-31 04:46:16.767296 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:46:16.767305 | orchestrator | Tuesday 31 March 2026 04:46:12 +0000 (0:00:00.146) 0:11:44.882 ********* 2026-03-31 04:46:16.767315 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:16.767325 | orchestrator | 2026-03-31 04:46:16.767335 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:46:16.767345 | orchestrator | Tuesday 31 March 2026 04:46:12 +0000 (0:00:00.160) 0:11:45.043 ********* 2026-03-31 04:46:16.767354 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767364 | orchestrator | 2026-03-31 04:46:16.767374 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:46:16.767383 | orchestrator | Tuesday 31 March 2026 04:46:12 +0000 (0:00:00.165) 0:11:45.208 ********* 2026-03-31 04:46:16.767469 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:46:16.767492 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:46:16.767502 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:46:16.767512 | orchestrator | 2026-03-31 04:46:16.767521 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:46:16.767531 | orchestrator | Tuesday 31 March 2026 04:46:13 +0000 (0:00:00.715) 0:11:45.923 ********* 2026-03-31 04:46:16.767541 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:16.767559 | orchestrator | 2026-03-31 04:46:16.767569 | orchestrator | TASK [ceph-facts : 
Find a running mon container] ******************************* 2026-03-31 04:46:16.767578 | orchestrator | Tuesday 31 March 2026 04:46:13 +0000 (0:00:00.253) 0:11:46.177 ********* 2026-03-31 04:46:16.767588 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:46:16.767598 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:46:16.767608 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:46:16.767618 | orchestrator | 2026-03-31 04:46:16.767627 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:46:16.767637 | orchestrator | Tuesday 31 March 2026 04:46:15 +0000 (0:00:01.748) 0:11:47.925 ********* 2026-03-31 04:46:16.767647 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-31 04:46:16.767658 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-31 04:46:16.767668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-31 04:46:16.767677 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:16.767687 | orchestrator | 2026-03-31 04:46:16.767697 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:46:16.767707 | orchestrator | Tuesday 31 March 2026 04:46:15 +0000 (0:00:00.441) 0:11:48.366 ********* 2026-03-31 04:46:16.767718 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:46:16.767737 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-03-31 04:46:16.767747 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:46:16.767758 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:16.767768 | orchestrator | 2026-03-31 04:46:16.767778 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:46:16.767787 | orchestrator | Tuesday 31 March 2026 04:46:16 +0000 (0:00:00.901) 0:11:49.268 ********* 2026-03-31 04:46:16.767799 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:16.767821 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.186438 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-03-31 04:46:21.186559 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:21.186577 | orchestrator | 2026-03-31 04:46:21.186591 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:46:21.186629 | orchestrator | Tuesday 31 March 2026 04:46:16 +0000 (0:00:00.167) 0:11:49.436 ********* 2026-03-31 04:46:21.186644 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:46:13.987602', 'end': '2026-03-31 04:46:14.031794', 'delta': '0:00:00.044192', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:46:21.186659 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:46:14.536911', 'end': '2026-03-31 04:46:14.570510', 'delta': '0:00:00.033599', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:46:21.186685 | orchestrator | ok: 
[testbed-node-2] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:46:15.047322', 'end': '2026-03-31 04:46:15.091668', 'delta': '0:00:00.044346', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:46:21.186698 | orchestrator | 2026-03-31 04:46:21.186709 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:46:21.186721 | orchestrator | Tuesday 31 March 2026 04:46:16 +0000 (0:00:00.190) 0:11:49.626 ********* 2026-03-31 04:46:21.186732 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:21.186744 | orchestrator | 2026-03-31 04:46:21.186755 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:46:21.186767 | orchestrator | Tuesday 31 March 2026 04:46:17 +0000 (0:00:00.250) 0:11:49.877 ********* 2026-03-31 04:46:21.186778 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:21.186789 | orchestrator | 2026-03-31 04:46:21.186800 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:46:21.186811 | orchestrator | Tuesday 31 March 2026 04:46:18 +0000 (0:00:00.936) 0:11:50.813 ********* 2026-03-31 04:46:21.186822 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:21.186834 | orchestrator | 2026-03-31 04:46:21.186845 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:46:21.186856 | 
orchestrator | Tuesday 31 March 2026 04:46:18 +0000 (0:00:00.145) 0:11:50.959 *********
2026-03-31 04:46:21.186867 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:46:21.186878 | orchestrator |
2026-03-31 04:46:21.186890 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:46:21.186929 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.942) 0:11:51.902 *********
2026-03-31 04:46:21.186941 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:21.186952 | orchestrator |
2026-03-31 04:46:21.186983 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-31 04:46:21.186998 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.148) 0:11:52.050 *********
2026-03-31 04:46:21.187031 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187045 | orchestrator |
2026-03-31 04:46:21.187057 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-31 04:46:21.187070 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.147) 0:11:52.198 *********
2026-03-31 04:46:21.187084 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187095 | orchestrator |
2026-03-31 04:46:21.187106 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:46:21.187117 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.238) 0:11:52.437 *********
2026-03-31 04:46:21.187128 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187140 | orchestrator |
2026-03-31 04:46:21.187151 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-31 04:46:21.187162 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.114) 0:11:52.552 *********
2026-03-31 04:46:21.187173 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187184 | orchestrator |
2026-03-31 04:46:21.187195 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-31 04:46:21.187206 | orchestrator | Tuesday 31 March 2026 04:46:19 +0000 (0:00:00.122) 0:11:52.674 *********
2026-03-31 04:46:21.187218 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187229 | orchestrator |
2026-03-31 04:46:21.187240 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-31 04:46:21.187251 | orchestrator | Tuesday 31 March 2026 04:46:20 +0000 (0:00:00.140) 0:11:52.815 *********
2026-03-31 04:46:21.187262 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187273 | orchestrator |
2026-03-31 04:46:21.187284 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-31 04:46:21.187295 | orchestrator | Tuesday 31 March 2026 04:46:20 +0000 (0:00:00.119) 0:11:52.935 *********
2026-03-31 04:46:21.187306 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187317 | orchestrator |
2026-03-31 04:46:21.187329 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 04:46:21.187340 | orchestrator | Tuesday 31 March 2026 04:46:20 +0000 (0:00:00.126) 0:11:53.075 *********
2026-03-31 04:46:21.187351 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187362 | orchestrator |
2026-03-31 04:46:21.187373 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-31 04:46:21.187385 | orchestrator | Tuesday 31 March 2026 04:46:20 +0000 (0:00:00.126) 0:11:53.202 *********
2026-03-31 04:46:21.187396 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:21.187407 | orchestrator |
2026-03-31 04:46:21.187418 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-31 04:46:21.187429 | orchestrator | Tuesday 31 March 2026
04:46:20 +0000 (0:00:00.128) 0:11:53.331 ********* 2026-03-31 04:46:21.187441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.187453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.187471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.187491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-03-31 04:46:21.187504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.187523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.399723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.399850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 
'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:46:21.399896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.399946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:46:21.399961 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:21.399975 | orchestrator | 2026-03-31 04:46:21.399987 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:46:21.399999 | orchestrator | Tuesday 31 March 2026 04:46:21 +0000 (0:00:00.524) 0:11:53.856 ********* 2026-03-31 04:46:21.400033 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400048 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400061 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400082 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400123 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400144 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400162 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:21.400198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '49050c5a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49050c5a-8b56-4e13-a731-86d499e8d1b4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:31.547635 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:31.547753 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:46:31.547771 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:31.547785 | orchestrator | 2026-03-31 04:46:31.547798 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:46:31.547811 | 
orchestrator | Tuesday 31 March 2026 04:46:21 +0000 (0:00:00.215) 0:11:54.071 *********
2026-03-31 04:46:31.547822 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.547835 | orchestrator |
2026-03-31 04:46:31.547846 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 04:46:31.547857 | orchestrator | Tuesday 31 March 2026 04:46:21 +0000 (0:00:00.519) 0:11:54.591 *********
2026-03-31 04:46:31.547868 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.547879 | orchestrator |
2026-03-31 04:46:31.547891 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:46:31.547902 | orchestrator | Tuesday 31 March 2026 04:46:22 +0000 (0:00:00.122) 0:11:54.713 *********
2026-03-31 04:46:31.547913 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.547965 | orchestrator |
2026-03-31 04:46:31.547977 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:46:31.547988 | orchestrator | Tuesday 31 March 2026 04:46:22 +0000 (0:00:00.481) 0:11:55.195 *********
2026-03-31 04:46:31.547999 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548010 | orchestrator |
2026-03-31 04:46:31.548021 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:46:31.548032 | orchestrator | Tuesday 31 March 2026 04:46:22 +0000 (0:00:00.235) 0:11:55.328 *********
2026-03-31 04:46:31.548043 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548054 | orchestrator |
2026-03-31 04:46:31.548065 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:46:31.548077 | orchestrator | Tuesday 31 March 2026 04:46:22 +0000 (0:00:00.140) 0:11:55.564 *********
2026-03-31 04:46:31.548091 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548103 | orchestrator |
2026-03-31 04:46:31.548116 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 04:46:31.548128 | orchestrator | Tuesday 31 March 2026 04:46:23 +0000 (0:00:00.140) 0:11:55.705 *********
2026-03-31 04:46:31.548141 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-31 04:46:31.548154 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-31 04:46:31.548186 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-31 04:46:31.548199 | orchestrator |
2026-03-31 04:46:31.548211 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 04:46:31.548224 | orchestrator | Tuesday 31 March 2026 04:46:23 +0000 (0:00:00.927) 0:11:56.633 *********
2026-03-31 04:46:31.548236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-31 04:46:31.548250 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-31 04:46:31.548263 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-31 04:46:31.548275 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548287 | orchestrator |
2026-03-31 04:46:31.548299 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 04:46:31.548312 | orchestrator | Tuesday 31 March 2026 04:46:24 +0000 (0:00:00.176) 0:11:56.809 *********
2026-03-31 04:46:31.548324 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548337 | orchestrator |
2026-03-31 04:46:31.548348 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:46:31.548361 | orchestrator | Tuesday 31 March 2026 04:46:24 +0000 (0:00:00.138) 0:11:56.948 *********
2026-03-31 04:46:31.548373 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:46:31.548385 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] =>
(item=testbed-node-1) 2026-03-31 04:46:31.548398 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:46:31.548410 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:46:31.548423 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:46:31.548435 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:46:31.548471 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:46:31.548483 | orchestrator | 2026-03-31 04:46:31.548494 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:46:31.548505 | orchestrator | Tuesday 31 March 2026 04:46:25 +0000 (0:00:01.130) 0:11:58.078 ********* 2026-03-31 04:46:31.548516 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:46:31.548527 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:46:31.548538 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:46:31.548549 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:46:31.548560 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:46:31.548571 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:46:31.548582 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:46:31.548593 | orchestrator | 2026-03-31 04:46:31.548604 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 04:46:31.548614 | orchestrator | Tuesday 31 March 2026 04:46:27 +0000 (0:00:01.619) 0:11:59.698 
*********
2026-03-31 04:46:31.548625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-31 04:46:31.548637 | orchestrator |
2026-03-31 04:46:31.548649 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:46:31.548659 | orchestrator | Tuesday 31 March 2026 04:46:27 +0000 (0:00:00.502) 0:12:00.200 *********
2026-03-31 04:46:31.548670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-31 04:46:31.548681 | orchestrator |
2026-03-31 04:46:31.548692 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:46:31.548703 | orchestrator | Tuesday 31 March 2026 04:46:27 +0000 (0:00:00.208) 0:12:00.409 *********
2026-03-31 04:46:31.548724 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.548736 | orchestrator |
2026-03-31 04:46:31.548747 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:46:31.548758 | orchestrator | Tuesday 31 March 2026 04:46:28 +0000 (0:00:00.615) 0:12:01.024 *********
2026-03-31 04:46:31.548769 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548780 | orchestrator |
2026-03-31 04:46:31.548791 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:46:31.548802 | orchestrator | Tuesday 31 March 2026 04:46:28 +0000 (0:00:00.150) 0:12:01.174 *********
2026-03-31 04:46:31.548813 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548824 | orchestrator |
2026-03-31 04:46:31.548835 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:46:31.548845 | orchestrator | Tuesday 31 March 2026 04:46:28 +0000 (0:00:00.132) 0:12:01.307 *********
2026-03-31 04:46:31.548856 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548867 | orchestrator |
2026-03-31 04:46:31.548878 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:46:31.548889 | orchestrator | Tuesday 31 March 2026 04:46:28 +0000 (0:00:00.128) 0:12:01.435 *********
2026-03-31 04:46:31.548900 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.548911 | orchestrator |
2026-03-31 04:46:31.548939 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:46:31.548951 | orchestrator | Tuesday 31 March 2026 04:46:29 +0000 (0:00:00.487) 0:12:01.923 *********
2026-03-31 04:46:31.548962 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.548973 | orchestrator |
2026-03-31 04:46:31.548984 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:46:31.548994 | orchestrator | Tuesday 31 March 2026 04:46:29 +0000 (0:00:00.140) 0:12:02.063 *********
2026-03-31 04:46:31.549005 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.549016 | orchestrator |
2026-03-31 04:46:31.549027 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:46:31.549038 | orchestrator | Tuesday 31 March 2026 04:46:29 +0000 (0:00:00.150) 0:12:02.214 *********
2026-03-31 04:46:31.549049 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.549060 | orchestrator |
2026-03-31 04:46:31.549071 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:46:31.549081 | orchestrator | Tuesday 31 March 2026 04:46:30 +0000 (0:00:00.545) 0:12:02.759 *********
2026-03-31 04:46:31.549092 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.549103 | orchestrator |
2026-03-31 04:46:31.549114 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:46:31.549125 | orchestrator | Tuesday 31 March 2026 04:46:30 +0000 (0:00:00.397) 0:12:03.241 *********
2026-03-31 04:46:31.549136 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.549147 | orchestrator |
2026-03-31 04:46:31.549158 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:46:31.549169 | orchestrator | Tuesday 31 March 2026 04:46:30 +0000 (0:00:00.397) 0:12:03.638 *********
2026-03-31 04:46:31.549180 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:31.549191 | orchestrator |
2026-03-31 04:46:31.549202 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:46:31.549212 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.170) 0:12:03.809 *********
2026-03-31 04:46:31.549223 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.549234 | orchestrator |
2026-03-31 04:46:31.549245 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:46:31.549256 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.144) 0:12:03.954 *********
2026-03-31 04:46:31.549266 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:31.549277 | orchestrator |
2026-03-31 04:46:31.549288 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:46:31.549299 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.134) 0:12:04.089 *********
2026-03-31 04:46:31.549329 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:43.349471 | orchestrator |
2026-03-31 04:46:43.349590 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:46:43.349608 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.133) 0:12:04.223 *********
2026-03-31 04:46:43.349621 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:43.349633 | orchestrator |
2026-03-31 04:46:43.349644 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:46:43.349656 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.150) 0:12:04.373 *********
2026-03-31 04:46:43.349667 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:43.349678 | orchestrator |
2026-03-31 04:46:43.349689 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:46:43.349701 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.134) 0:12:04.507 *********
2026-03-31 04:46:43.349712 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:43.349724 | orchestrator |
2026-03-31 04:46:43.349735 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:46:43.349746 | orchestrator | Tuesday 31 March 2026 04:46:31 +0000 (0:00:00.147) 0:12:04.655 *********
2026-03-31 04:46:43.349757 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:43.349768 | orchestrator |
2026-03-31 04:46:43.349779 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:46:43.349790 | orchestrator | Tuesday 31 March 2026 04:46:32 +0000 (0:00:00.157) 0:12:04.812 *********
2026-03-31 04:46:43.349801 | orchestrator | ok: [testbed-node-2]
2026-03-31 04:46:43.349812 | orchestrator |
2026-03-31 04:46:43.349823 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:46:43.349834 | orchestrator | Tuesday 31 March 2026 04:46:32 +0000 (0:00:00.216) 0:12:05.029 *********
2026-03-31 04:46:43.349845 | orchestrator | skipping: [testbed-node-2]
2026-03-31 04:46:43.349856 | orchestrator |
2026-03-31 04:46:43.349867 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:46:43.349878 | orchestrator | Tuesday 31 March 2026 04:46:32 +0000 (0:00:00.151) 0:12:05.180 *********
2026-03-31 04:46:43.349889 | orchestrator | skipping:
[testbed-node-2] 2026-03-31 04:46:43.349900 | orchestrator | 2026-03-31 04:46:43.349911 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:46:43.349922 | orchestrator | Tuesday 31 March 2026 04:46:32 +0000 (0:00:00.131) 0:12:05.311 ********* 2026-03-31 04:46:43.349933 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.349966 | orchestrator | 2026-03-31 04:46:43.349981 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:46:43.349994 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.413) 0:12:05.725 ********* 2026-03-31 04:46:43.350007 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350074 | orchestrator | 2026-03-31 04:46:43.350088 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:46:43.350101 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.147) 0:12:05.872 ********* 2026-03-31 04:46:43.350114 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350126 | orchestrator | 2026-03-31 04:46:43.350138 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:46:43.350151 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.115) 0:12:05.988 ********* 2026-03-31 04:46:43.350164 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350176 | orchestrator | 2026-03-31 04:46:43.350189 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:46:43.350202 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.133) 0:12:06.121 ********* 2026-03-31 04:46:43.350213 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350224 | orchestrator | 2026-03-31 04:46:43.350235 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 
04:46:43.350247 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.129) 0:12:06.251 ********* 2026-03-31 04:46:43.350281 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350292 | orchestrator | 2026-03-31 04:46:43.350304 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:46:43.350314 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.129) 0:12:06.380 ********* 2026-03-31 04:46:43.350325 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350336 | orchestrator | 2026-03-31 04:46:43.350347 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:46:43.350358 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.131) 0:12:06.511 ********* 2026-03-31 04:46:43.350369 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350380 | orchestrator | 2026-03-31 04:46:43.350391 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:46:43.350402 | orchestrator | Tuesday 31 March 2026 04:46:33 +0000 (0:00:00.129) 0:12:06.641 ********* 2026-03-31 04:46:43.350413 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350424 | orchestrator | 2026-03-31 04:46:43.350435 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-31 04:46:43.350446 | orchestrator | Tuesday 31 March 2026 04:46:34 +0000 (0:00:00.130) 0:12:06.771 ********* 2026-03-31 04:46:43.350457 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350468 | orchestrator | 2026-03-31 04:46:43.350479 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:46:43.350490 | orchestrator | Tuesday 31 March 2026 04:46:34 +0000 (0:00:00.191) 0:12:06.963 ********* 2026-03-31 04:46:43.350501 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.350512 | orchestrator | 
2026-03-31 04:46:43.350523 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:46:43.350534 | orchestrator | Tuesday 31 March 2026 04:46:35 +0000 (0:00:00.915) 0:12:07.878 ********* 2026-03-31 04:46:43.350545 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.350556 | orchestrator | 2026-03-31 04:46:43.350567 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:46:43.350578 | orchestrator | Tuesday 31 March 2026 04:46:36 +0000 (0:00:01.450) 0:12:09.328 ********* 2026-03-31 04:46:43.350589 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-31 04:46:43.350601 | orchestrator | 2026-03-31 04:46:43.350646 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:46:43.350658 | orchestrator | Tuesday 31 March 2026 04:46:37 +0000 (0:00:00.497) 0:12:09.826 ********* 2026-03-31 04:46:43.350669 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350680 | orchestrator | 2026-03-31 04:46:43.350691 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:46:43.350702 | orchestrator | Tuesday 31 March 2026 04:46:37 +0000 (0:00:00.124) 0:12:09.950 ********* 2026-03-31 04:46:43.350713 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350724 | orchestrator | 2026-03-31 04:46:43.350735 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:46:43.350746 | orchestrator | Tuesday 31 March 2026 04:46:37 +0000 (0:00:00.131) 0:12:10.081 ********* 2026-03-31 04:46:43.350757 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:46:43.350768 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:46:43.350779 | 
orchestrator | 2026-03-31 04:46:43.350790 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:46:43.350801 | orchestrator | Tuesday 31 March 2026 04:46:38 +0000 (0:00:00.815) 0:12:10.897 ********* 2026-03-31 04:46:43.350812 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.350823 | orchestrator | 2026-03-31 04:46:43.350834 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:46:43.350845 | orchestrator | Tuesday 31 March 2026 04:46:38 +0000 (0:00:00.490) 0:12:11.388 ********* 2026-03-31 04:46:43.350864 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350875 | orchestrator | 2026-03-31 04:46:43.350886 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:46:43.350897 | orchestrator | Tuesday 31 March 2026 04:46:38 +0000 (0:00:00.165) 0:12:11.553 ********* 2026-03-31 04:46:43.350908 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.350919 | orchestrator | 2026-03-31 04:46:43.350930 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:46:43.350980 | orchestrator | Tuesday 31 March 2026 04:46:39 +0000 (0:00:00.135) 0:12:11.689 ********* 2026-03-31 04:46:43.350993 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351004 | orchestrator | 2026-03-31 04:46:43.351015 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:46:43.351026 | orchestrator | Tuesday 31 March 2026 04:46:39 +0000 (0:00:00.128) 0:12:11.818 ********* 2026-03-31 04:46:43.351037 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-31 04:46:43.351049 | orchestrator | 2026-03-31 04:46:43.351059 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 
04:46:43.351070 | orchestrator | Tuesday 31 March 2026 04:46:39 +0000 (0:00:00.204) 0:12:12.023 ********* 2026-03-31 04:46:43.351081 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.351092 | orchestrator | 2026-03-31 04:46:43.351103 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:46:43.351114 | orchestrator | Tuesday 31 March 2026 04:46:40 +0000 (0:00:00.692) 0:12:12.715 ********* 2026-03-31 04:46:43.351125 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:46:43.351136 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:46:43.351147 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:46:43.351158 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351169 | orchestrator | 2026-03-31 04:46:43.351180 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:46:43.351190 | orchestrator | Tuesday 31 March 2026 04:46:40 +0000 (0:00:00.158) 0:12:12.873 ********* 2026-03-31 04:46:43.351201 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351212 | orchestrator | 2026-03-31 04:46:43.351223 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-31 04:46:43.351234 | orchestrator | Tuesday 31 March 2026 04:46:40 +0000 (0:00:00.119) 0:12:12.993 ********* 2026-03-31 04:46:43.351245 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351256 | orchestrator | 2026-03-31 04:46:43.351267 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:46:43.351277 | orchestrator | Tuesday 31 March 2026 04:46:40 +0000 (0:00:00.447) 0:12:13.440 ********* 2026-03-31 04:46:43.351288 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351299 | orchestrator 
| 2026-03-31 04:46:43.351310 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:46:43.351321 | orchestrator | Tuesday 31 March 2026 04:46:40 +0000 (0:00:00.136) 0:12:13.576 ********* 2026-03-31 04:46:43.351332 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351343 | orchestrator | 2026-03-31 04:46:43.351353 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:46:43.351364 | orchestrator | Tuesday 31 March 2026 04:46:41 +0000 (0:00:00.174) 0:12:13.750 ********* 2026-03-31 04:46:43.351375 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:43.351386 | orchestrator | 2026-03-31 04:46:43.351397 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:46:43.351408 | orchestrator | Tuesday 31 March 2026 04:46:41 +0000 (0:00:00.151) 0:12:13.901 ********* 2026-03-31 04:46:43.351419 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.351430 | orchestrator | 2026-03-31 04:46:43.351441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:46:43.351458 | orchestrator | Tuesday 31 March 2026 04:46:42 +0000 (0:00:01.708) 0:12:15.610 ********* 2026-03-31 04:46:43.351470 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:43.351481 | orchestrator | 2026-03-31 04:46:43.351492 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:46:43.351503 | orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.163) 0:12:15.773 ********* 2026-03-31 04:46:43.351519 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-31 04:46:43.351530 | orchestrator | 2026-03-31 04:46:43.351549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:46:55.602386 | 
orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.244) 0:12:16.017 ********* 2026-03-31 04:46:55.602503 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602520 | orchestrator | 2026-03-31 04:46:55.602533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:46:55.602545 | orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.145) 0:12:16.163 ********* 2026-03-31 04:46:55.602556 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602568 | orchestrator | 2026-03-31 04:46:55.602579 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:46:55.602591 | orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.149) 0:12:16.312 ********* 2026-03-31 04:46:55.602602 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602613 | orchestrator | 2026-03-31 04:46:55.602624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:46:55.602635 | orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.149) 0:12:16.461 ********* 2026-03-31 04:46:55.602646 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602658 | orchestrator | 2026-03-31 04:46:55.602669 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-31 04:46:55.602680 | orchestrator | Tuesday 31 March 2026 04:46:43 +0000 (0:00:00.151) 0:12:16.612 ********* 2026-03-31 04:46:55.602691 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602702 | orchestrator | 2026-03-31 04:46:55.602713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:46:55.602724 | orchestrator | Tuesday 31 March 2026 04:46:44 +0000 (0:00:00.138) 0:12:16.751 ********* 2026-03-31 04:46:55.602735 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602746 | orchestrator | 2026-03-31 
04:46:55.602758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:46:55.602769 | orchestrator | Tuesday 31 March 2026 04:46:44 +0000 (0:00:00.421) 0:12:17.173 ********* 2026-03-31 04:46:55.602780 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602791 | orchestrator | 2026-03-31 04:46:55.602802 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:46:55.602813 | orchestrator | Tuesday 31 March 2026 04:46:44 +0000 (0:00:00.148) 0:12:17.321 ********* 2026-03-31 04:46:55.602824 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.602836 | orchestrator | 2026-03-31 04:46:55.602847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:46:55.602858 | orchestrator | Tuesday 31 March 2026 04:46:44 +0000 (0:00:00.150) 0:12:17.472 ********* 2026-03-31 04:46:55.602869 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:46:55.602881 | orchestrator | 2026-03-31 04:46:55.602892 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:46:55.602904 | orchestrator | Tuesday 31 March 2026 04:46:45 +0000 (0:00:00.220) 0:12:17.693 ********* 2026-03-31 04:46:55.602915 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-31 04:46:55.602929 | orchestrator | 2026-03-31 04:46:55.602941 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:46:55.602980 | orchestrator | Tuesday 31 March 2026 04:46:45 +0000 (0:00:00.220) 0:12:17.913 ********* 2026-03-31 04:46:55.602994 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-31 04:46:55.603030 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-31 04:46:55.603044 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-31 
04:46:55.603057 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-31 04:46:55.603069 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-31 04:46:55.603082 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-31 04:46:55.603094 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-31 04:46:55.603107 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:46:55.603119 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:46:55.603132 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:46:55.603144 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:46:55.603157 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:46:55.603170 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:46:55.603182 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:46:55.603194 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-31 04:46:55.603205 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-31 04:46:55.603216 | orchestrator | 2026-03-31 04:46:55.603228 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:46:55.603239 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:05.794) 0:12:23.708 ********* 2026-03-31 04:46:55.603250 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603261 | orchestrator | 2026-03-31 04:46:55.603272 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:46:55.603284 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:00.126) 0:12:23.834 ********* 2026-03-31 04:46:55.603295 | orchestrator | skipping: [testbed-node-2] 2026-03-31 
04:46:55.603306 | orchestrator | 2026-03-31 04:46:55.603317 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:46:55.603328 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:00.136) 0:12:23.971 ********* 2026-03-31 04:46:55.603339 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603350 | orchestrator | 2026-03-31 04:46:55.603361 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:46:55.603373 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:00.126) 0:12:24.097 ********* 2026-03-31 04:46:55.603384 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603395 | orchestrator | 2026-03-31 04:46:55.603420 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:46:55.603451 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:00.142) 0:12:24.240 ********* 2026-03-31 04:46:55.603464 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603475 | orchestrator | 2026-03-31 04:46:55.603486 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:46:55.603497 | orchestrator | Tuesday 31 March 2026 04:46:51 +0000 (0:00:00.133) 0:12:24.373 ********* 2026-03-31 04:46:55.603508 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603519 | orchestrator | 2026-03-31 04:46:55.603530 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:46:55.603541 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.404) 0:12:24.778 ********* 2026-03-31 04:46:55.603552 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603563 | orchestrator | 2026-03-31 04:46:55.603574 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 
2026-03-31 04:46:55.603585 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.155) 0:12:24.933 ********* 2026-03-31 04:46:55.603596 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603607 | orchestrator | 2026-03-31 04:46:55.603618 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:46:55.603638 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.137) 0:12:25.071 ********* 2026-03-31 04:46:55.603649 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603660 | orchestrator | 2026-03-31 04:46:55.603671 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:46:55.603682 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.138) 0:12:25.209 ********* 2026-03-31 04:46:55.603694 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603705 | orchestrator | 2026-03-31 04:46:55.603716 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:46:55.603727 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.137) 0:12:25.346 ********* 2026-03-31 04:46:55.603738 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603749 | orchestrator | 2026-03-31 04:46:55.603760 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:46:55.603771 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.139) 0:12:25.486 ********* 2026-03-31 04:46:55.603782 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603793 | orchestrator | 2026-03-31 04:46:55.603805 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:46:55.603816 | orchestrator | Tuesday 31 March 2026 04:46:52 +0000 (0:00:00.136) 0:12:25.622 ********* 2026-03-31 04:46:55.603827 | orchestrator | 
skipping: [testbed-node-2] 2026-03-31 04:46:55.603838 | orchestrator | 2026-03-31 04:46:55.603849 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:46:55.603860 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.223) 0:12:25.846 ********* 2026-03-31 04:46:55.603871 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603882 | orchestrator | 2026-03-31 04:46:55.603893 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:46:55.603904 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.135) 0:12:25.981 ********* 2026-03-31 04:46:55.603915 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603926 | orchestrator | 2026-03-31 04:46:55.603937 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:46:55.603948 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.248) 0:12:26.230 ********* 2026-03-31 04:46:55.603976 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.603988 | orchestrator | 2026-03-31 04:46:55.603999 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:46:55.604010 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.131) 0:12:26.361 ********* 2026-03-31 04:46:55.604020 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604031 | orchestrator | 2026-03-31 04:46:55.604043 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:46:55.604107 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.125) 0:12:26.487 ********* 2026-03-31 04:46:55.604119 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604131 | orchestrator | 2026-03-31 04:46:55.604142 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:46:55.604153 | orchestrator | Tuesday 31 March 2026 04:46:53 +0000 (0:00:00.128) 0:12:26.616 ********* 2026-03-31 04:46:55.604164 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604175 | orchestrator | 2026-03-31 04:46:55.604186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:46:55.604197 | orchestrator | Tuesday 31 March 2026 04:46:54 +0000 (0:00:00.463) 0:12:27.079 ********* 2026-03-31 04:46:55.604208 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604219 | orchestrator | 2026-03-31 04:46:55.604230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:46:55.604242 | orchestrator | Tuesday 31 March 2026 04:46:54 +0000 (0:00:00.138) 0:12:27.217 ********* 2026-03-31 04:46:55.604253 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604271 | orchestrator | 2026-03-31 04:46:55.604283 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:46:55.604294 | orchestrator | Tuesday 31 March 2026 04:46:54 +0000 (0:00:00.146) 0:12:27.363 ********* 2026-03-31 04:46:55.604305 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 04:46:55.604316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:46:55.604327 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:46:55.604338 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:46:55.604349 | orchestrator | 2026-03-31 04:46:55.604360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:46:55.604371 | orchestrator | Tuesday 31 March 2026 04:46:55 +0000 (0:00:00.426) 0:12:27.790 ********* 2026-03-31 04:46:55.604388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 
04:46:55.604406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:47:24.604583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:47:24.604697 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.604714 | orchestrator | 2026-03-31 04:47:24.604727 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:47:24.604739 | orchestrator | Tuesday 31 March 2026 04:46:55 +0000 (0:00:00.480) 0:12:28.271 ********* 2026-03-31 04:47:24.604749 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-31 04:47:24.604761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-31 04:47:24.604771 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-31 04:47:24.604781 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.604791 | orchestrator | 2026-03-31 04:47:24.604802 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:47:24.604812 | orchestrator | Tuesday 31 March 2026 04:46:55 +0000 (0:00:00.408) 0:12:28.679 ********* 2026-03-31 04:47:24.604822 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.604833 | orchestrator | 2026-03-31 04:47:24.604843 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:47:24.604853 | orchestrator | Tuesday 31 March 2026 04:46:56 +0000 (0:00:00.139) 0:12:28.819 ********* 2026-03-31 04:47:24.604864 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-31 04:47:24.604874 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.604884 | orchestrator | 2026-03-31 04:47:24.604894 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:47:24.604904 | orchestrator | Tuesday 31 March 2026 04:46:56 +0000 (0:00:00.335) 0:12:29.154 ********* 2026-03-31 
04:47:24.604914 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.604925 | orchestrator | 2026-03-31 04:47:24.604935 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:47:24.604946 | orchestrator | Tuesday 31 March 2026 04:46:57 +0000 (0:00:00.885) 0:12:30.040 ********* 2026-03-31 04:47:24.604956 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:47:24.604967 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:47:24.604977 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-31 04:47:24.604987 | orchestrator | 2026-03-31 04:47:24.605046 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-31 04:47:24.605057 | orchestrator | Tuesday 31 March 2026 04:46:58 +0000 (0:00:00.989) 0:12:31.030 ********* 2026-03-31 04:47:24.605067 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-03-31 04:47:24.605077 | orchestrator | 2026-03-31 04:47:24.605087 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-31 04:47:24.605097 | orchestrator | Tuesday 31 March 2026 04:46:58 +0000 (0:00:00.212) 0:12:31.242 ********* 2026-03-31 04:47:24.605107 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605118 | orchestrator | 2026-03-31 04:47:24.605154 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-31 04:47:24.605166 | orchestrator | Tuesday 31 March 2026 04:46:59 +0000 (0:00:00.848) 0:12:32.090 ********* 2026-03-31 04:47:24.605178 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.605190 | orchestrator | 2026-03-31 04:47:24.605201 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-31 04:47:24.605212 | orchestrator | 
Tuesday 31 March 2026 04:46:59 +0000 (0:00:00.152) 0:12:32.243 ********* 2026-03-31 04:47:24.605223 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:47:24.605234 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:47:24.605245 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:47:24.605256 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-03-31 04:47:24.605266 | orchestrator | 2026-03-31 04:47:24.605278 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-31 04:47:24.605289 | orchestrator | Tuesday 31 March 2026 04:47:05 +0000 (0:00:05.990) 0:12:38.233 ********* 2026-03-31 04:47:24.605300 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605311 | orchestrator | 2026-03-31 04:47:24.605322 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-31 04:47:24.605333 | orchestrator | Tuesday 31 March 2026 04:47:05 +0000 (0:00:00.196) 0:12:38.430 ********* 2026-03-31 04:47:24.605344 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-31 04:47:24.605355 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-31 04:47:24.605366 | orchestrator | 2026-03-31 04:47:24.605378 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:47:24.605388 | orchestrator | Tuesday 31 March 2026 04:47:08 +0000 (0:00:02.327) 0:12:40.758 ********* 2026-03-31 04:47:24.605397 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-31 04:47:24.605407 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-31 04:47:24.605417 | orchestrator | 2026-03-31 04:47:24.605427 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-31 04:47:24.605436 | orchestrator | Tuesday 31 March 
2026 04:47:09 +0000 (0:00:01.013) 0:12:41.771 ********* 2026-03-31 04:47:24.605446 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605456 | orchestrator | 2026-03-31 04:47:24.605465 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-31 04:47:24.605475 | orchestrator | Tuesday 31 March 2026 04:47:09 +0000 (0:00:00.526) 0:12:42.297 ********* 2026-03-31 04:47:24.605485 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.605494 | orchestrator | 2026-03-31 04:47:24.605504 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-31 04:47:24.605513 | orchestrator | Tuesday 31 March 2026 04:47:09 +0000 (0:00:00.138) 0:12:42.435 ********* 2026-03-31 04:47:24.605523 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.605533 | orchestrator | 2026-03-31 04:47:24.605555 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-31 04:47:24.605583 | orchestrator | Tuesday 31 March 2026 04:47:09 +0000 (0:00:00.128) 0:12:42.564 ********* 2026-03-31 04:47:24.605594 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-03-31 04:47:24.605604 | orchestrator | 2026-03-31 04:47:24.605614 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-31 04:47:24.605623 | orchestrator | Tuesday 31 March 2026 04:47:10 +0000 (0:00:00.261) 0:12:42.825 ********* 2026-03-31 04:47:24.605633 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.605643 | orchestrator | 2026-03-31 04:47:24.605652 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-31 04:47:24.605662 | orchestrator | Tuesday 31 March 2026 04:47:10 +0000 (0:00:00.157) 0:12:42.983 ********* 2026-03-31 04:47:24.605672 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.605681 | orchestrator | 
2026-03-31 04:47:24.605691 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-31 04:47:24.605709 | orchestrator | Tuesday 31 March 2026 04:47:10 +0000 (0:00:00.149) 0:12:43.132 ********* 2026-03-31 04:47:24.605722 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-03-31 04:47:24.605739 | orchestrator | 2026-03-31 04:47:24.605755 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-31 04:47:24.605772 | orchestrator | Tuesday 31 March 2026 04:47:10 +0000 (0:00:00.467) 0:12:43.599 ********* 2026-03-31 04:47:24.605788 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605803 | orchestrator | 2026-03-31 04:47:24.605820 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-31 04:47:24.605836 | orchestrator | Tuesday 31 March 2026 04:47:11 +0000 (0:00:01.082) 0:12:44.682 ********* 2026-03-31 04:47:24.605853 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605870 | orchestrator | 2026-03-31 04:47:24.605887 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-31 04:47:24.605910 | orchestrator | Tuesday 31 March 2026 04:47:12 +0000 (0:00:00.945) 0:12:45.628 ********* 2026-03-31 04:47:24.605926 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.605936 | orchestrator | 2026-03-31 04:47:24.605946 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-31 04:47:24.605956 | orchestrator | Tuesday 31 March 2026 04:47:14 +0000 (0:00:01.509) 0:12:47.137 ********* 2026-03-31 04:47:24.605965 | orchestrator | changed: [testbed-node-2] 2026-03-31 04:47:24.605975 | orchestrator | 2026-03-31 04:47:24.605985 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-31 04:47:24.606089 | orchestrator | Tuesday 31 March 
2026 04:47:17 +0000 (0:00:02.993) 0:12:50.131 ********* 2026-03-31 04:47:24.606101 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-31 04:47:24.606111 | orchestrator | 2026-03-31 04:47:24.606121 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-31 04:47:24.606130 | orchestrator | Tuesday 31 March 2026 04:47:17 +0000 (0:00:00.251) 0:12:50.382 ********* 2026-03-31 04:47:24.606140 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:47:24.606150 | orchestrator | 2026-03-31 04:47:24.606160 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-31 04:47:24.606169 | orchestrator | Tuesday 31 March 2026 04:47:19 +0000 (0:00:01.350) 0:12:51.733 ********* 2026-03-31 04:47:24.606179 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:47:24.606189 | orchestrator | 2026-03-31 04:47:24.606199 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-31 04:47:24.606208 | orchestrator | Tuesday 31 March 2026 04:47:20 +0000 (0:00:01.298) 0:12:53.031 ********* 2026-03-31 04:47:24.606218 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.606228 | orchestrator | 2026-03-31 04:47:24.606237 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-31 04:47:24.606247 | orchestrator | Tuesday 31 March 2026 04:47:20 +0000 (0:00:00.314) 0:12:53.346 ********* 2026-03-31 04:47:24.606257 | orchestrator | ok: [testbed-node-2] 2026-03-31 04:47:24.606266 | orchestrator | 2026-03-31 04:47:24.606276 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-31 04:47:24.606286 | orchestrator | Tuesday 31 March 2026 04:47:20 +0000 (0:00:00.163) 0:12:53.509 ********* 2026-03-31 04:47:24.606295 | orchestrator | skipping: 
[testbed-node-2] => (item=dashboard)  2026-03-31 04:47:24.606305 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-03-31 04:47:24.606315 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.606325 | orchestrator | 2026-03-31 04:47:24.606334 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-31 04:47:24.606344 | orchestrator | Tuesday 31 March 2026 04:47:21 +0000 (0:00:00.822) 0:12:54.332 ********* 2026-03-31 04:47:24.606354 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-31 04:47:24.606363 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-03-31 04:47:24.606383 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-03-31 04:47:24.606393 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-31 04:47:24.606403 | orchestrator | skipping: [testbed-node-2] 2026-03-31 04:47:24.606412 | orchestrator | 2026-03-31 04:47:24.606422 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-03-31 04:47:24.606432 | orchestrator | 2026-03-31 04:47:24.606442 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:47:24.606451 | orchestrator | Tuesday 31 March 2026 04:47:22 +0000 (0:00:01.346) 0:12:55.678 ********* 2026-03-31 04:47:24.606461 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:47:24.606471 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:47:24.606481 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:47:24.606490 | orchestrator | 2026-03-31 04:47:24.606500 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:47:24.606510 | orchestrator | Tuesday 31 March 2026 04:47:23 +0000 (0:00:00.638) 0:12:56.316 ********* 2026-03-31 04:47:24.606519 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:47:24.606536 | orchestrator | ok: [testbed-node-4] 
2026-03-31 04:47:24.606546 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:47:24.606556 | orchestrator | 2026-03-31 04:47:24.606575 | orchestrator | TASK [Get pool list] *********************************************************** 2026-03-31 04:47:29.145662 | orchestrator | Tuesday 31 March 2026 04:47:24 +0000 (0:00:00.950) 0:12:57.267 ********* 2026-03-31 04:47:29.145774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:47:29.145792 | orchestrator | 2026-03-31 04:47:29.145806 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-03-31 04:47:29.145817 | orchestrator | Tuesday 31 March 2026 04:47:26 +0000 (0:00:02.065) 0:12:59.333 ********* 2026-03-31 04:47:29.145828 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:47:29.145840 | orchestrator | 2026-03-31 04:47:29.145851 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-03-31 04:47:29.145862 | orchestrator | Tuesday 31 March 2026 04:47:28 +0000 (0:00:01.910) 0:13:01.243 ********* 2026-03-31 04:47:29.145929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-31T02:59:23.814886+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.146189 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-31T03:00:37.351046+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.146217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-31T03:00:40.668667+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': 
"0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.146261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-31T03:01:43.765803+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': 
{'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '65', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.584068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-31T03:01:49.390052+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 
'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '67', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.584211 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-31T03:01:55.556222+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 
'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '67', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.584251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-31T03:02:01.786423+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '177', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.584279 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-31T03:02:08.089379+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:29.584303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-31T03:02:21.073733+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 
'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '127', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '115', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:30.357563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-03-31T03:03:09.263752+0000', 
'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '102', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 102, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:30.357693 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-31T03:03:18.464568+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '111', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 111, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 
04:47:30.357739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-31T03:03:27.251107+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '186', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 186, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 
'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:30.357755 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-31T03:03:36.215609+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '126', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 126, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 
'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-31 04:47:30.357790 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-03-31T03:03:45.391041+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '134', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 134, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 
'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-31 04:48:52.823262 | orchestrator |
2026-03-31 04:48:52.823373 | orchestrator | TASK [Disable balancer] ********************************************************
2026-03-31 04:48:52.823390 | orchestrator | Tuesday 31 March 2026 04:47:30 +0000 (0:00:01.784) 0:13:03.028 *********
2026-03-31 04:48:52.823401 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:48:52.823411 | orchestrator |
2026-03-31 04:48:52.823422 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-03-31 04:48:52.823432 | orchestrator | Tuesday 31 March 2026 04:47:32 +0000 (0:00:01.909) 0:13:04.937 *********
2026-03-31 04:48:52.823443 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-03-31 04:48:52.823454 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-03-31 04:48:52.823465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-03-31 04:48:52.823475 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-03-31 04:48:52.823486 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-03-31 04:48:52.823496 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-03-31 04:48:52.823507 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-03-31 04:48:52.823517 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-03-31 04:48:52.823556 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-03-31 04:48:52.823567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-03-31 04:48:52.823577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-03-31 04:48:52.823587 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-03-31 04:48:52.823597 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-03-31 04:48:52.823607 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-03-31 04:48:52.823617 | orchestrator |
2026-03-31 04:48:52.823627 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-03-31 04:48:52.823637 | orchestrator | Tuesday 31 March 2026 04:48:40 +0000 (0:01:08.555) 0:14:13.492 *********
2026-03-31 04:48:52.823647 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-03-31 04:48:52.823657 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-03-31 04:48:52.823667 | orchestrator |
2026-03-31 04:48:52.823676 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-31 04:48:52.823686 | orchestrator |
2026-03-31 04:48:52.823696 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 04:48:52.823706 | orchestrator | Tuesday 31 March 2026 04:48:45 +0000 (0:00:04.773) 0:14:18.266 *********
2026-03-31 04:48:52.823716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-03-31 04:48:52.823726 | orchestrator |
2026-03-31 04:48:52.823736 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-31 04:48:52.823746 | orchestrator | Tuesday 31 March 2026 04:48:45 +0000 (0:00:00.246) 0:14:18.512 *********
2026-03-31 04:48:52.823756 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.823767 | orchestrator |
2026-03-31 04:48:52.823777 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-31 04:48:52.823787 | orchestrator | Tuesday 31 March 2026 04:48:46 +0000 (0:00:00.455) 0:14:18.968 *********
2026-03-31 04:48:52.823796 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.823806 | orchestrator |
2026-03-31 04:48:52.823816 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-31 04:48:52.823826 | orchestrator | Tuesday 31 March 2026 04:48:46 +0000 (0:00:00.162) 0:14:19.130 *********
2026-03-31 04:48:52.823836 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.823849 | orchestrator |
2026-03-31 04:48:52.823864 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-31 04:48:52.823882 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.749) 0:14:19.880 *********
2026-03-31 04:48:52.823898 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.823914 | orchestrator |
2026-03-31 04:48:52.823927 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-31 04:48:52.823937 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.127) 0:14:20.007 *********
2026-03-31 04:48:52.823947 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.823957 | orchestrator |
2026-03-31 04:48:52.823967 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-31 04:48:52.823991 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.162) 0:14:20.170 *********
2026-03-31 04:48:52.824001 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.824011 | orchestrator |
2026-03-31 04:48:52.824022 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-31 04:48:52.824032 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.157) 0:14:20.328 *********
2026-03-31 04:48:52.824042 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:52.824052 | orchestrator |
2026-03-31 04:48:52.824062 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-31 04:48:52.824096 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.151) 0:14:20.479 *********
2026-03-31 04:48:52.824164 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.824175 | orchestrator |
2026-03-31 04:48:52.824185 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-31 04:48:52.824195 | orchestrator | Tuesday 31 March 2026 04:48:47 +0000 (0:00:00.138) 0:14:20.618 *********
2026-03-31 04:48:52.824205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:48:52.824215 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:48:52.824224 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:48:52.824234 | orchestrator |
2026-03-31 04:48:52.824244 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-31 04:48:52.824254 | orchestrator | Tuesday 31 March 2026 04:48:48 +0000 (0:00:00.674) 0:14:21.292 *********
2026-03-31 04:48:52.824263 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:52.824273 | orchestrator |
2026-03-31 04:48:52.824283 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-31 04:48:52.824293 | orchestrator | Tuesday 31 March 2026 04:48:48 +0000 (0:00:00.248) 0:14:21.541 *********
2026-03-31 04:48:52.824302 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:48:52.824312 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:48:52.824322 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:48:52.824332 | orchestrator |
2026-03-31 04:48:52.824342 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-31 04:48:52.824351 | orchestrator | Tuesday 31 March 2026 04:48:51 +0000 (0:00:02.197) 0:14:23.738 *********
2026-03-31 04:48:52.824361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:48:52.824371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:48:52.824381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:48:52.824391 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:52.824401 | orchestrator |
2026-03-31 04:48:52.824411 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-31 04:48:52.824420 | orchestrator | Tuesday 31 March 2026 04:48:51 +0000 (0:00:00.437) 0:14:24.176 *********
2026-03-31 04:48:52.824431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824464 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:52.824474 | orchestrator |
2026-03-31 04:48:52.824484 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-31 04:48:52.824494 | orchestrator | Tuesday 31 March 2026 04:48:52 +0000 (0:00:00.933) 0:14:25.110 *********
2026-03-31 04:48:52.824506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:52.824552 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:52.824562 | orchestrator |
2026-03-31 04:48:52.824572 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-31 04:48:52.824582 | orchestrator | Tuesday 31 March 2026 04:48:52 +0000 (0:00:00.175) 0:14:25.286 *********
2026-03-31 04:48:52.824601 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:48:49.391738', 'end': '2026-03-31 04:48:49.442889', 'delta': '0:00:00.051151', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:48:56.853728 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:48:49.948123', 'end': '2026-03-31 04:48:49.994098', 'delta': '0:00:00.045975', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:48:56.853844 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:48:50.828418', 'end': '2026-03-31 04:48:50.878315', 'delta': '0:00:00.049897', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:48:56.853861 | orchestrator |
2026-03-31 04:48:56.853876 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-31 04:48:56.853889 | orchestrator | Tuesday 31 March 2026 04:48:52 +0000 (0:00:00.206) 0:14:25.492 *********
2026-03-31 04:48:56.853900 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.853913 | orchestrator |
2026-03-31 04:48:56.853924 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-31 04:48:56.853960 | orchestrator | Tuesday 31 March 2026 04:48:53 +0000 (0:00:00.274) 0:14:25.766 *********
2026-03-31 04:48:56.853971 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.853983 | orchestrator |
2026-03-31 04:48:56.853994 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-31 04:48:56.854004 | orchestrator | Tuesday 31 March 2026 04:48:53 +0000 (0:00:00.876) 0:14:26.642 *********
2026-03-31 04:48:56.854092 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.854108 | orchestrator |
2026-03-31 04:48:56.854169 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-31 04:48:56.854180 | orchestrator | Tuesday 31 March 2026 04:48:54 +0000 (0:00:00.169) 0:14:26.811 *********
2026-03-31 04:48:56.854191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:48:56.854202 | orchestrator |
2026-03-31 04:48:56.854213 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:48:56.854226 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.934) 0:14:27.746 *********
2026-03-31 04:48:56.854239 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.854251 | orchestrator |
2026-03-31 04:48:56.854263 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-31 04:48:56.854275 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.154) 0:14:27.900 *********
2026-03-31 04:48:56.854288 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854300 | orchestrator |
2026-03-31 04:48:56.854313 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-31 04:48:56.854325 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.115) 0:14:28.016 *********
2026-03-31 04:48:56.854339 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854352 | orchestrator |
2026-03-31 04:48:56.854378 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:48:56.854392 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.252) 0:14:28.268 *********
2026-03-31 04:48:56.854405 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854417 | orchestrator |
2026-03-31 04:48:56.854429 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-31 04:48:56.854442 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.136) 0:14:28.404 *********
2026-03-31 04:48:56.854454 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854467 | orchestrator |
2026-03-31 04:48:56.854479 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-31 04:48:56.854491 | orchestrator | Tuesday 31 March 2026 04:48:55 +0000 (0:00:00.139) 0:14:28.544 *********
2026-03-31 04:48:56.854504 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.854517 | orchestrator |
2026-03-31 04:48:56.854530 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-31 04:48:56.854542 | orchestrator | Tuesday 31 March 2026 04:48:56 +0000 (0:00:00.187) 0:14:28.731 *********
2026-03-31 04:48:56.854555 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854567 | orchestrator |
2026-03-31 04:48:56.854580 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-31 04:48:56.854591 | orchestrator | Tuesday 31 March 2026 04:48:56 +0000 (0:00:00.131) 0:14:28.862 *********
2026-03-31 04:48:56.854602 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.854613 | orchestrator |
2026-03-31 04:48:56.854624 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 04:48:56.854635 | orchestrator | Tuesday 31 March 2026 04:48:56 +0000 (0:00:00.155) 0:14:29.018 *********
2026-03-31 04:48:56.854668 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:56.854686 | orchestrator |
2026-03-31 04:48:56.854705 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-31 04:48:56.854724 | orchestrator | Tuesday 31 March 2026 04:48:56 +0000 (0:00:00.134) 0:14:29.152 *********
2026-03-31 04:48:56.854742 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:48:56.854759 | orchestrator |
2026-03-31 04:48:56.854778 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-31 04:48:56.854812 | orchestrator | Tuesday 31 March 2026 04:48:56 +0000 (0:00:00.158) 0:14:29.310 *********
2026-03-31 04:48:56.854832 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:56.854853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}})  2026-03-31 04:48:56.854875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:48:56.854905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}})  2026-03-31 04:48:56.854925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:56.854945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:56.854981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:48:57.519101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:57.519266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:48:57.519284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:57.519299 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}})  2026-03-31 04:48:57.519330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}})  2026-03-31 04:48:57.519343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:48:57.519383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-31 04:48:57.519420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:48:57.519434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:48:57.519452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters':
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-31 04:48:57.519466 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:48:57.519478 | orchestrator |
2026-03-31 04:48:57.519490 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-31 04:48:57.519502 | orchestrator | Tuesday 31 March 2026 04:48:57 +0000 (0:00:00.678) 0:14:29.989 *********
2026-03-31 04:48:57.519516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.519544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {},
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [],
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode':
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:48:57.720793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:49:02.480960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:49:02.481087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:49:02.481204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:49:02.481292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:49:02.481306 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481319 | orchestrator |
2026-03-31 04:49:02.481330 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 04:49:02.481342 | orchestrator | Tuesday 31 March 2026 04:48:57 +0000 (0:00:00.403) 0:14:30.392 *********
2026-03-31 04:49:02.481352 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:02.481363 | orchestrator |
2026-03-31 04:49:02.481372 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 04:49:02.481382 | orchestrator | Tuesday 31 March 2026 04:48:58 +0000 (0:00:00.488) 0:14:30.881 *********
2026-03-31 04:49:02.481392 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:02.481402 | orchestrator |
2026-03-31 04:49:02.481411 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:49:02.481421 | orchestrator | Tuesday 31 March 2026 04:48:58 +0000 (0:00:00.467) 0:14:31.015 *********
2026-03-31 04:49:02.481431 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:02.481441 | orchestrator |
2026-03-31 04:49:02.481451 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:49:02.481463 | orchestrator | Tuesday 31 March 2026 04:48:58 +0000 (0:00:00.126) 0:14:31.483 *********
2026-03-31 04:49:02.481474 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481485 | orchestrator |
2026-03-31 04:49:02.481496 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:49:02.481507 | orchestrator | Tuesday 31 March 2026 04:48:58 +0000 (0:00:00.246) 0:14:31.609 *********
2026-03-31 04:49:02.481518 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481530 | orchestrator |
2026-03-31 04:49:02.481541 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:49:02.481552 | orchestrator | Tuesday 31 March 2026 04:48:59 +0000 (0:00:00.246) 0:14:31.855 *********
2026-03-31 04:49:02.481563 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481574 | orchestrator |
2026-03-31 04:49:02.481585 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 04:49:02.481596 | orchestrator | Tuesday 31 March 2026 04:48:59 +0000 (0:00:00.155) 0:14:32.010 *********
2026-03-31 04:49:02.481617 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:49:02.481628 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:49:02.481639 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:49:02.481650 | orchestrator |
2026-03-31 04:49:02.481661 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 04:49:02.481671 | orchestrator | Tuesday 31 March 2026 04:49:00 +0000 (0:00:00.968) 0:14:32.979 *********
2026-03-31 04:49:02.481681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:49:02.481691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:49:02.481708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:49:02.481719 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481728 | orchestrator |
2026-03-31 04:49:02.481738 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 04:49:02.481748 | orchestrator | Tuesday 31 March 2026 04:49:00 +0000 (0:00:00.160) 0:14:33.140 *********
2026-03-31 04:49:02.481757 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-03-31 04:49:02.481768 | orchestrator |
2026-03-31 04:49:02.481778 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:49:02.481790 | orchestrator | Tuesday 31 March 2026 04:49:00 +0000 (0:00:00.230) 0:14:33.370 *********
2026-03-31 04:49:02.481800 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481810 | orchestrator |
2026-03-31 04:49:02.481820 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:49:02.481829 | orchestrator | Tuesday 31 March 2026 04:49:00 +0000 (0:00:00.150) 0:14:33.521 *********
2026-03-31 04:49:02.481839 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481848 | orchestrator |
2026-03-31 04:49:02.481858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:49:02.481868 | orchestrator | Tuesday 31 March 2026 04:49:01 +0000 (0:00:00.433) 0:14:33.954 *********
2026-03-31 04:49:02.481877 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481887 | orchestrator |
2026-03-31 04:49:02.481897 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:49:02.481906 | orchestrator | Tuesday 31 March 2026 04:49:01 +0000 (0:00:00.146) 0:14:34.101 *********
2026-03-31 04:49:02.481916 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:02.481926 | orchestrator |
2026-03-31 04:49:02.481935 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:49:02.481945 | orchestrator | Tuesday 31 March 2026 04:49:01 +0000 (0:00:00.248) 0:14:34.350 *********
2026-03-31 04:49:02.481955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:49:02.481964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:49:02.481974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:49:02.481984 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.481993 | orchestrator |
2026-03-31 04:49:02.482003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:49:02.482013 | orchestrator | Tuesday 31 March 2026 04:49:02 +0000 (0:00:00.399) 0:14:34.750 *********
2026-03-31 04:49:02.482085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:49:02.482095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:49:02.482105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:49:02.482139 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:02.482151 | orchestrator |
2026-03-31 04:49:02.482170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:49:17.157782 | orchestrator | Tuesday 31 March 2026 04:49:02 +0000 (0:00:00.400) 0:14:35.151 *********
2026-03-31 04:49:17.157915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:49:17.157959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:49:17.157972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:49:17.157983 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.157995 | orchestrator |
2026-03-31 04:49:17.158007 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:49:17.158090 | orchestrator | Tuesday 31 March 2026 04:49:02 +0000 (0:00:00.406) 0:14:35.557 *********
2026-03-31 04:49:17.158104 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158117 | orchestrator |
2026-03-31 04:49:17.158128 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:49:17.158174 | orchestrator | Tuesday 31 March 2026 04:49:03 +0000 (0:00:00.152) 0:14:35.710 *********
2026-03-31 04:49:17.158186 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-31 04:49:17.158197 | orchestrator |
2026-03-31 04:49:17.158208 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:49:17.158220 | orchestrator | Tuesday 31 March 2026 04:49:03 +0000 (0:00:00.371) 0:14:36.081 *********
2026-03-31 04:49:17.158232 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:49:17.158247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:49:17.158260 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:49:17.158272 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:49:17.158284 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:49:17.158296 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:49:17.158309 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:49:17.158321 | orchestrator |
2026-03-31 04:49:17.158333 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:49:17.158345 | orchestrator | Tuesday 31 March 2026 04:49:04 +0000 (0:00:01.162) 0:14:37.243 *********
2026-03-31 04:49:17.158357 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:49:17.158370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:49:17.158382 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:49:17.158394 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:49:17.158420 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:49:17.158432 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:49:17.158443 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:49:17.158453 | orchestrator |
2026-03-31 04:49:17.158464 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-31 04:49:17.158475 | orchestrator | Tuesday 31 March 2026 04:49:06 +0000 (0:00:01.675) 0:14:38.919 *********
2026-03-31 04:49:17.158486 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158497 | orchestrator |
2026-03-31 04:49:17.158508 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-31 04:49:17.158518 | orchestrator | Tuesday 31 March 2026 04:49:06 +0000 (0:00:00.453) 0:14:39.373 *********
2026-03-31 04:49:17.158529 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158540 | orchestrator |
2026-03-31 04:49:17.158551 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-31 04:49:17.158562 | orchestrator | Tuesday 31 March 2026 04:49:06 +0000 (0:00:00.143) 0:14:39.516 *********
2026-03-31 04:49:17.158572 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158583 | orchestrator |
2026-03-31 04:49:17.158594 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-31 04:49:17.158614 | orchestrator | Tuesday 31 March 2026 04:49:07 +0000 (0:00:00.876) 0:14:40.393 *********
2026-03-31 04:49:17.158625 | orchestrator | changed: [testbed-node-3] => (item=2)
2026-03-31 04:49:17.158636 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-31 04:49:17.158647 | orchestrator |
2026-03-31 04:49:17.158658 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:49:17.158668 | orchestrator | Tuesday 31 March 2026 04:49:10 +0000 (0:00:03.134) 0:14:43.528 *********
2026-03-31 04:49:17.158679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-03-31 04:49:17.158692 | orchestrator |
2026-03-31 04:49:17.158703 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:49:17.158714 | orchestrator | Tuesday 31 March 2026 04:49:11 +0000 (0:00:00.189) 0:14:43.717 *********
2026-03-31 04:49:17.158724 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-03-31 04:49:17.158735 | orchestrator |
2026-03-31 04:49:17.158746 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:49:17.158757 | orchestrator | Tuesday 31 March 2026 04:49:11 +0000 (0:00:00.212) 0:14:43.930 *********
2026-03-31 04:49:17.158768 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.158778 | orchestrator |
2026-03-31 04:49:17.158789 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:49:17.158800 | orchestrator | Tuesday 31 March 2026 04:49:11 +0000 (0:00:00.146) 0:14:44.077 *********
2026-03-31 04:49:17.158811 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158822 | orchestrator |
2026-03-31 04:49:17.158833 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:49:17.158864 | orchestrator | Tuesday 31 March 2026 04:49:11 +0000 (0:00:00.494) 0:14:44.572 *********
2026-03-31 04:49:17.158876 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158887 | orchestrator |
2026-03-31 04:49:17.158898 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:49:17.158909 | orchestrator | Tuesday 31 March 2026 04:49:12 +0000 (0:00:00.511) 0:14:45.084 *********
2026-03-31 04:49:17.158919 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.158930 | orchestrator |
2026-03-31 04:49:17.158941 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:49:17.158952 | orchestrator | Tuesday 31 March 2026 04:49:12 +0000 (0:00:00.526) 0:14:45.610 *********
2026-03-31 04:49:17.158963 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.158974 | orchestrator |
2026-03-31 04:49:17.158985 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:49:17.158996 | orchestrator | Tuesday 31 March 2026 04:49:13 +0000 (0:00:00.125) 0:14:45.736 *********
2026-03-31 04:49:17.159016 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159027 | orchestrator |
2026-03-31 04:49:17.159038 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:49:17.159049 | orchestrator | Tuesday 31 March 2026 04:49:13 +0000 (0:00:00.139) 0:14:45.875 *********
2026-03-31 04:49:17.159060 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159071 | orchestrator |
2026-03-31 04:49:17.159082 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:49:17.159093 | orchestrator | Tuesday 31 March 2026 04:49:13 +0000 (0:00:00.159) 0:14:46.034 *********
2026-03-31 04:49:17.159104 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159115 | orchestrator |
2026-03-31 04:49:17.159126 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:49:17.159162 | orchestrator | Tuesday 31 March 2026 04:49:14 +0000 (0:00:00.783) 0:14:46.818 *********
2026-03-31 04:49:17.159173 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159185 | orchestrator |
2026-03-31 04:49:17.159195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:49:17.159206 | orchestrator | Tuesday 31 March 2026 04:49:14 +0000 (0:00:00.553) 0:14:47.371 *********
2026-03-31 04:49:17.159225 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159236 | orchestrator |
2026-03-31 04:49:17.159247 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:49:17.159258 | orchestrator | Tuesday 31 March 2026 04:49:14 +0000 (0:00:00.128) 0:14:47.499 *********
2026-03-31 04:49:17.159268 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159279 | orchestrator |
2026-03-31 04:49:17.159290 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:49:17.159301 | orchestrator | Tuesday 31 March 2026 04:49:14 +0000 (0:00:00.149) 0:14:47.649 *********
2026-03-31 04:49:17.159312 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159322 | orchestrator |
2026-03-31 04:49:17.159333 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:49:17.159350 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.175) 0:14:47.824 *********
2026-03-31 04:49:17.159361 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159372 | orchestrator |
2026-03-31 04:49:17.159383 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:49:17.159394 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.172) 0:14:47.997 *********
2026-03-31 04:49:17.159405 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159415 | orchestrator |
2026-03-31 04:49:17.159426 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:49:17.159437 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.179) 0:14:48.177 *********
2026-03-31 04:49:17.159448 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159459 | orchestrator |
2026-03-31 04:49:17.159470 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:49:17.159480 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.141) 0:14:48.318 *********
2026-03-31 04:49:17.159491 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159502 | orchestrator |
2026-03-31 04:49:17.159513 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:49:17.159524 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.129) 0:14:48.448 *********
2026-03-31 04:49:17.159535 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159546 | orchestrator |
2026-03-31 04:49:17.159557 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:49:17.159568 | orchestrator | Tuesday 31 March 2026 04:49:15 +0000 (0:00:00.148) 0:14:48.596 *********
2026-03-31 04:49:17.159579 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159590 | orchestrator |
2026-03-31 04:49:17.159601 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:49:17.159611 | orchestrator | Tuesday 31 March 2026 04:49:16 +0000 (0:00:00.155) 0:14:48.752 *********
2026-03-31 04:49:17.159622 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:49:17.159633 | orchestrator |
2026-03-31 04:49:17.159644 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:49:17.159655 | orchestrator | Tuesday 31 March 2026 04:49:16 +0000 (0:00:00.254) 0:14:49.007 *********
2026-03-31 04:49:17.159666 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:49:17.159677 | orchestrator |
2026-03-31 04:49:17.159688 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:49:17.159699 |
orchestrator | Tuesday 31 March 2026 04:49:16 +0000 (0:00:00.403) 0:14:49.411 ********* 2026-03-31 04:49:17.159710 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:17.159721 | orchestrator | 2026-03-31 04:49:17.159732 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:49:17.159743 | orchestrator | Tuesday 31 March 2026 04:49:16 +0000 (0:00:00.123) 0:14:49.534 ********* 2026-03-31 04:49:17.159754 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:17.159765 | orchestrator | 2026-03-31 04:49:17.159776 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:49:17.159787 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.149) 0:14:49.683 ********* 2026-03-31 04:49:17.159819 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:17.159842 | orchestrator | 2026-03-31 04:49:17.159862 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:49:28.769206 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.142) 0:14:49.826 ********* 2026-03-31 04:49:28.769331 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769349 | orchestrator | 2026-03-31 04:49:28.769362 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:49:28.769374 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.137) 0:14:49.963 ********* 2026-03-31 04:49:28.769385 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769397 | orchestrator | 2026-03-31 04:49:28.769408 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:49:28.769419 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.129) 0:14:50.093 ********* 2026-03-31 04:49:28.769439 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769457 | orchestrator | 2026-03-31 
04:49:28.769475 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:49:28.769495 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.130) 0:14:50.223 ********* 2026-03-31 04:49:28.769513 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769530 | orchestrator | 2026-03-31 04:49:28.769549 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:49:28.769568 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.113) 0:14:50.336 ********* 2026-03-31 04:49:28.769586 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769598 | orchestrator | 2026-03-31 04:49:28.769610 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:49:28.769621 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.141) 0:14:50.478 ********* 2026-03-31 04:49:28.769632 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769643 | orchestrator | 2026-03-31 04:49:28.769654 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:49:28.769665 | orchestrator | Tuesday 31 March 2026 04:49:17 +0000 (0:00:00.133) 0:14:50.612 ********* 2026-03-31 04:49:28.769676 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769687 | orchestrator | 2026-03-31 04:49:28.769698 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-31 04:49:28.769710 | orchestrator | Tuesday 31 March 2026 04:49:18 +0000 (0:00:00.133) 0:14:50.745 ********* 2026-03-31 04:49:28.769722 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769735 | orchestrator | 2026-03-31 04:49:28.769747 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:49:28.769759 | orchestrator | Tuesday 31 March 2026 04:49:18 +0000 
(0:00:00.192) 0:14:50.937 ********* 2026-03-31 04:49:28.769771 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.769785 | orchestrator | 2026-03-31 04:49:28.769797 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:49:28.769810 | orchestrator | Tuesday 31 March 2026 04:49:19 +0000 (0:00:00.942) 0:14:51.880 ********* 2026-03-31 04:49:28.769822 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.769835 | orchestrator | 2026-03-31 04:49:28.769865 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:49:28.769879 | orchestrator | Tuesday 31 March 2026 04:49:20 +0000 (0:00:01.546) 0:14:53.426 ********* 2026-03-31 04:49:28.769891 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-31 04:49:28.769905 | orchestrator | 2026-03-31 04:49:28.769917 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:49:28.769930 | orchestrator | Tuesday 31 March 2026 04:49:20 +0000 (0:00:00.208) 0:14:53.634 ********* 2026-03-31 04:49:28.769943 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.769955 | orchestrator | 2026-03-31 04:49:28.769968 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:49:28.769980 | orchestrator | Tuesday 31 March 2026 04:49:21 +0000 (0:00:00.145) 0:14:53.779 ********* 2026-03-31 04:49:28.770067 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770082 | orchestrator | 2026-03-31 04:49:28.770095 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:49:28.770106 | orchestrator | Tuesday 31 March 2026 04:49:21 +0000 (0:00:00.138) 0:14:53.918 ********* 2026-03-31 04:49:28.770117 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 
04:49:28.770128 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:49:28.770139 | orchestrator | 2026-03-31 04:49:28.770208 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:49:28.770221 | orchestrator | Tuesday 31 March 2026 04:49:22 +0000 (0:00:00.782) 0:14:54.701 ********* 2026-03-31 04:49:28.770232 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.770243 | orchestrator | 2026-03-31 04:49:28.770253 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:49:28.770264 | orchestrator | Tuesday 31 March 2026 04:49:22 +0000 (0:00:00.456) 0:14:55.158 ********* 2026-03-31 04:49:28.770275 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770286 | orchestrator | 2026-03-31 04:49:28.770297 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:49:28.770308 | orchestrator | Tuesday 31 March 2026 04:49:22 +0000 (0:00:00.148) 0:14:55.306 ********* 2026-03-31 04:49:28.770319 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770329 | orchestrator | 2026-03-31 04:49:28.770343 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:49:28.770363 | orchestrator | Tuesday 31 March 2026 04:49:22 +0000 (0:00:00.155) 0:14:55.461 ********* 2026-03-31 04:49:28.770383 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770402 | orchestrator | 2026-03-31 04:49:28.770421 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:49:28.770440 | orchestrator | Tuesday 31 March 2026 04:49:22 +0000 (0:00:00.150) 0:14:55.612 ********* 2026-03-31 04:49:28.770457 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-31 04:49:28.770468 | orchestrator | 
2026-03-31 04:49:28.770479 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:49:28.770510 | orchestrator | Tuesday 31 March 2026 04:49:23 +0000 (0:00:00.246) 0:14:55.859 ********* 2026-03-31 04:49:28.770521 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.770532 | orchestrator | 2026-03-31 04:49:28.770544 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:49:28.770555 | orchestrator | Tuesday 31 March 2026 04:49:23 +0000 (0:00:00.750) 0:14:56.610 ********* 2026-03-31 04:49:28.770565 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:49:28.770576 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:49:28.770591 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:49:28.770610 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770629 | orchestrator | 2026-03-31 04:49:28.770648 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:49:28.770665 | orchestrator | Tuesday 31 March 2026 04:49:24 +0000 (0:00:00.426) 0:14:57.037 ********* 2026-03-31 04:49:28.770683 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770701 | orchestrator | 2026-03-31 04:49:28.770720 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-31 04:49:28.770737 | orchestrator | Tuesday 31 March 2026 04:49:24 +0000 (0:00:00.138) 0:14:57.175 ********* 2026-03-31 04:49:28.770755 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770772 | orchestrator | 2026-03-31 04:49:28.770789 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:49:28.770807 | orchestrator | Tuesday 31 March 2026 04:49:24 +0000 (0:00:00.166) 
0:14:57.342 ********* 2026-03-31 04:49:28.770839 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770857 | orchestrator | 2026-03-31 04:49:28.770875 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:49:28.770893 | orchestrator | Tuesday 31 March 2026 04:49:24 +0000 (0:00:00.160) 0:14:57.502 ********* 2026-03-31 04:49:28.770911 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.770930 | orchestrator | 2026-03-31 04:49:28.770950 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:49:28.770970 | orchestrator | Tuesday 31 March 2026 04:49:24 +0000 (0:00:00.153) 0:14:57.655 ********* 2026-03-31 04:49:28.770991 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771010 | orchestrator | 2026-03-31 04:49:28.771028 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:49:28.771047 | orchestrator | Tuesday 31 March 2026 04:49:25 +0000 (0:00:00.164) 0:14:57.820 ********* 2026-03-31 04:49:28.771066 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.771084 | orchestrator | 2026-03-31 04:49:28.771103 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:49:28.771122 | orchestrator | Tuesday 31 March 2026 04:49:26 +0000 (0:00:01.462) 0:14:59.283 ********* 2026-03-31 04:49:28.771142 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.771215 | orchestrator | 2026-03-31 04:49:28.771236 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:49:28.771256 | orchestrator | Tuesday 31 March 2026 04:49:26 +0000 (0:00:00.142) 0:14:59.425 ********* 2026-03-31 04:49:28.771275 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-31 04:49:28.771294 | orchestrator | 2026-03-31 04:49:28.771313 | 
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:49:28.771332 | orchestrator | Tuesday 31 March 2026 04:49:26 +0000 (0:00:00.234) 0:14:59.660 ********* 2026-03-31 04:49:28.771353 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771373 | orchestrator | 2026-03-31 04:49:28.771393 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:49:28.771415 | orchestrator | Tuesday 31 March 2026 04:49:27 +0000 (0:00:00.143) 0:14:59.803 ********* 2026-03-31 04:49:28.771435 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771456 | orchestrator | 2026-03-31 04:49:28.771476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:49:28.771493 | orchestrator | Tuesday 31 March 2026 04:49:27 +0000 (0:00:00.185) 0:14:59.989 ********* 2026-03-31 04:49:28.771511 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771529 | orchestrator | 2026-03-31 04:49:28.771547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:49:28.771567 | orchestrator | Tuesday 31 March 2026 04:49:27 +0000 (0:00:00.155) 0:15:00.144 ********* 2026-03-31 04:49:28.771587 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771607 | orchestrator | 2026-03-31 04:49:28.771627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-31 04:49:28.771646 | orchestrator | Tuesday 31 March 2026 04:49:27 +0000 (0:00:00.427) 0:15:00.571 ********* 2026-03-31 04:49:28.771666 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771686 | orchestrator | 2026-03-31 04:49:28.771706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:49:28.771726 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.149) 0:15:00.721 ********* 
2026-03-31 04:49:28.771747 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771765 | orchestrator | 2026-03-31 04:49:28.771784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:49:28.771803 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.145) 0:15:00.866 ********* 2026-03-31 04:49:28.771823 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771843 | orchestrator | 2026-03-31 04:49:28.771863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:49:28.771899 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.155) 0:15:01.022 ********* 2026-03-31 04:49:28.771919 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:28.771938 | orchestrator | 2026-03-31 04:49:28.771958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:49:28.771979 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.197) 0:15:01.220 ********* 2026-03-31 04:49:28.771999 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:28.772020 | orchestrator | 2026-03-31 04:49:28.772041 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:49:28.772089 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.219) 0:15:01.439 ********* 2026-03-31 04:49:51.441040 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-31 04:49:51.441157 | orchestrator | 2026-03-31 04:49:51.441295 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:49:51.441312 | orchestrator | Tuesday 31 March 2026 04:49:28 +0000 (0:00:00.200) 0:15:01.640 ********* 2026-03-31 04:49:51.441324 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-31 04:49:51.441336 | orchestrator | ok: [testbed-node-3] => 
(item=/var/lib/ceph/) 2026-03-31 04:49:51.441348 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-31 04:49:51.441476 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-31 04:49:51.441499 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-31 04:49:51.441510 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-31 04:49:51.441521 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-31 04:49:51.441532 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:49:51.441543 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:49:51.441555 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:49:51.441566 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:49:51.441577 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:49:51.441588 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:49:51.441633 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:49:51.441647 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-31 04:49:51.441658 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-31 04:49:51.441670 | orchestrator | 2026-03-31 04:49:51.441681 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:49:51.441692 | orchestrator | Tuesday 31 March 2026 04:49:34 +0000 (0:00:05.489) 0:15:07.130 ********* 2026-03-31 04:49:51.441704 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-31 04:49:51.441715 | orchestrator | 2026-03-31 04:49:51.441726 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-31 
04:49:51.441737 | orchestrator | Tuesday 31 March 2026 04:49:34 +0000 (0:00:00.209) 0:15:07.339 ********* 2026-03-31 04:49:51.441749 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:49:51.441761 | orchestrator | 2026-03-31 04:49:51.441778 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-31 04:49:51.441789 | orchestrator | Tuesday 31 March 2026 04:49:35 +0000 (0:00:00.499) 0:15:07.838 ********* 2026-03-31 04:49:51.441800 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:49:51.441812 | orchestrator | 2026-03-31 04:49:51.441823 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:49:51.441834 | orchestrator | Tuesday 31 March 2026 04:49:36 +0000 (0:00:01.293) 0:15:09.132 ********* 2026-03-31 04:49:51.441845 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.441876 | orchestrator | 2026-03-31 04:49:51.441888 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:49:51.441899 | orchestrator | Tuesday 31 March 2026 04:49:36 +0000 (0:00:00.125) 0:15:09.257 ********* 2026-03-31 04:49:51.441910 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.441921 | orchestrator | 2026-03-31 04:49:51.441932 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:49:51.441942 | orchestrator | Tuesday 31 March 2026 04:49:36 +0000 (0:00:00.146) 0:15:09.404 ********* 2026-03-31 04:49:51.441953 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.441965 | orchestrator | 2026-03-31 04:49:51.441975 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:49:51.441986 | 
orchestrator | Tuesday 31 March 2026 04:49:36 +0000 (0:00:00.135) 0:15:09.539 ********* 2026-03-31 04:49:51.441997 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442008 | orchestrator | 2026-03-31 04:49:51.442106 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:49:51.442127 | orchestrator | Tuesday 31 March 2026 04:49:36 +0000 (0:00:00.126) 0:15:09.665 ********* 2026-03-31 04:49:51.442142 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442159 | orchestrator | 2026-03-31 04:49:51.442204 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:49:51.442224 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.130) 0:15:09.796 ********* 2026-03-31 04:49:51.442238 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442249 | orchestrator | 2026-03-31 04:49:51.442260 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:49:51.442271 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.148) 0:15:09.944 ********* 2026-03-31 04:49:51.442282 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442333 | orchestrator | 2026-03-31 04:49:51.442346 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:49:51.442357 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.135) 0:15:10.079 ********* 2026-03-31 04:49:51.442368 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442379 | orchestrator | 2026-03-31 04:49:51.442390 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:49:51.442401 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.144) 0:15:10.224 ********* 2026-03-31 04:49:51.442412 | 
orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442423 | orchestrator | 2026-03-31 04:49:51.442456 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:49:51.442480 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.130) 0:15:10.354 ********* 2026-03-31 04:49:51.442492 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442503 | orchestrator | 2026-03-31 04:49:51.442514 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:49:51.442525 | orchestrator | Tuesday 31 March 2026 04:49:37 +0000 (0:00:00.138) 0:15:10.493 ********* 2026-03-31 04:49:51.442536 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:51.442547 | orchestrator | 2026-03-31 04:49:51.442558 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:49:51.442569 | orchestrator | Tuesday 31 March 2026 04:49:38 +0000 (0:00:00.201) 0:15:10.695 ********* 2026-03-31 04:49:51.442580 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:49:51.442591 | orchestrator | 2026-03-31 04:49:51.442603 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:49:51.442614 | orchestrator | Tuesday 31 March 2026 04:49:41 +0000 (0:00:03.479) 0:15:14.174 ********* 2026-03-31 04:49:51.442625 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:49:51.442636 | orchestrator | 2026-03-31 04:49:51.442697 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:49:51.442710 | orchestrator | Tuesday 31 March 2026 04:49:41 +0000 (0:00:00.164) 0:15:14.339 ********* 2026-03-31 04:49:51.442723 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 
'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-31 04:49:51.442738 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-31 04:49:51.442751 | orchestrator | 2026-03-31 04:49:51.442762 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:49:51.442780 | orchestrator | Tuesday 31 March 2026 04:49:49 +0000 (0:00:07.481) 0:15:21.821 ********* 2026-03-31 04:49:51.442791 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442802 | orchestrator | 2026-03-31 04:49:51.442813 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:49:51.442824 | orchestrator | Tuesday 31 March 2026 04:49:49 +0000 (0:00:00.136) 0:15:21.957 ********* 2026-03-31 04:49:51.442835 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442846 | orchestrator | 2026-03-31 04:49:51.442857 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:49:51.442868 | orchestrator | Tuesday 31 March 2026 04:49:49 +0000 (0:00:00.125) 0:15:22.083 ********* 2026-03-31 04:49:51.442878 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442889 | orchestrator | 2026-03-31 04:49:51.442900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:49:51.442911 | orchestrator | Tuesday 31 March 
2026 04:49:49 +0000 (0:00:00.161) 0:15:22.244 ********* 2026-03-31 04:49:51.442922 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442933 | orchestrator | 2026-03-31 04:49:51.442943 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:49:51.442954 | orchestrator | Tuesday 31 March 2026 04:49:49 +0000 (0:00:00.157) 0:15:22.402 ********* 2026-03-31 04:49:51.442965 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.442976 | orchestrator | 2026-03-31 04:49:51.442987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:49:51.442998 | orchestrator | Tuesday 31 March 2026 04:49:49 +0000 (0:00:00.156) 0:15:22.559 ********* 2026-03-31 04:49:51.443008 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:49:51.443019 | orchestrator | 2026-03-31 04:49:51.443030 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:49:51.443041 | orchestrator | Tuesday 31 March 2026 04:49:50 +0000 (0:00:00.288) 0:15:22.847 ********* 2026-03-31 04:49:51.443052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:49:51.443063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:49:51.443073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:49:51.443084 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.443095 | orchestrator | 2026-03-31 04:49:51.443106 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:49:51.443117 | orchestrator | Tuesday 31 March 2026 04:49:50 +0000 (0:00:00.457) 0:15:23.305 ********* 2026-03-31 04:49:51.443129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:49:51.443140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:49:51.443151 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:49:51.443162 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:49:51.443221 | orchestrator | 2026-03-31 04:49:51.443235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:49:51.443245 | orchestrator | Tuesday 31 March 2026 04:49:51 +0000 (0:00:00.407) 0:15:23.712 ********* 2026-03-31 04:49:51.443256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-31 04:49:51.443267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-31 04:49:51.443287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-31 04:50:16.324724 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.324842 | orchestrator | 2026-03-31 04:50:16.324858 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:50:16.324872 | orchestrator | Tuesday 31 March 2026 04:49:51 +0000 (0:00:00.395) 0:15:24.108 ********* 2026-03-31 04:50:16.324884 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.324897 | orchestrator | 2026-03-31 04:50:16.324908 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:50:16.324920 | orchestrator | Tuesday 31 March 2026 04:49:51 +0000 (0:00:00.161) 0:15:24.270 ********* 2026-03-31 04:50:16.324931 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 04:50:16.324942 | orchestrator | 2026-03-31 04:50:16.324953 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:50:16.324964 | orchestrator | Tuesday 31 March 2026 04:49:51 +0000 (0:00:00.408) 0:15:24.678 ********* 2026-03-31 04:50:16.324975 | orchestrator | changed: [testbed-node-3] 2026-03-31 04:50:16.324986 | orchestrator | 2026-03-31 04:50:16.324998 | orchestrator | TASK [ceph-osd : Set_fact add_osd] 
********************************************* 2026-03-31 04:50:16.325009 | orchestrator | Tuesday 31 March 2026 04:49:53 +0000 (0:00:01.471) 0:15:26.149 ********* 2026-03-31 04:50:16.325020 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.325031 | orchestrator | 2026-03-31 04:50:16.325042 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:50:16.325054 | orchestrator | Tuesday 31 March 2026 04:49:53 +0000 (0:00:00.145) 0:15:26.295 ********* 2026-03-31 04:50:16.325065 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:50:16.325076 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:50:16.325088 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:50:16.325099 | orchestrator | 2026-03-31 04:50:16.325110 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-31 04:50:16.325121 | orchestrator | Tuesday 31 March 2026 04:49:54 +0000 (0:00:00.653) 0:15:26.949 ********* 2026-03-31 04:50:16.325132 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-03-31 04:50:16.325143 | orchestrator | 2026-03-31 04:50:16.325154 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-31 04:50:16.325165 | orchestrator | Tuesday 31 March 2026 04:49:54 +0000 (0:00:00.219) 0:15:27.168 ********* 2026-03-31 04:50:16.325176 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.325187 | orchestrator | 2026-03-31 04:50:16.325252 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-31 04:50:16.325284 | orchestrator | Tuesday 31 March 2026 04:49:54 +0000 (0:00:00.122) 0:15:27.290 ********* 2026-03-31 04:50:16.325297 | orchestrator | skipping: 
[testbed-node-3] 2026-03-31 04:50:16.325310 | orchestrator | 2026-03-31 04:50:16.325322 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-31 04:50:16.325335 | orchestrator | Tuesday 31 March 2026 04:49:54 +0000 (0:00:00.132) 0:15:27.423 ********* 2026-03-31 04:50:16.325348 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.325360 | orchestrator | 2026-03-31 04:50:16.325373 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-31 04:50:16.325385 | orchestrator | Tuesday 31 March 2026 04:49:55 +0000 (0:00:00.485) 0:15:27.909 ********* 2026-03-31 04:50:16.325398 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.325432 | orchestrator | 2026-03-31 04:50:16.325445 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-31 04:50:16.325459 | orchestrator | Tuesday 31 March 2026 04:49:55 +0000 (0:00:00.158) 0:15:28.068 ********* 2026-03-31 04:50:16.325471 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 04:50:16.325485 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 04:50:16.325499 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 04:50:16.325511 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 04:50:16.325524 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 04:50:16.325536 | orchestrator | 2026-03-31 04:50:16.325548 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-31 04:50:16.325561 | orchestrator | Tuesday 31 March 2026 04:49:57 +0000 (0:00:01.969) 0:15:30.037 ********* 2026-03-31 04:50:16.325573 | orchestrator | skipping: [testbed-node-3] 2026-03-31 
04:50:16.325586 | orchestrator | 2026-03-31 04:50:16.325599 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-31 04:50:16.325611 | orchestrator | Tuesday 31 March 2026 04:49:57 +0000 (0:00:00.128) 0:15:30.166 ********* 2026-03-31 04:50:16.325624 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-03-31 04:50:16.325635 | orchestrator | 2026-03-31 04:50:16.325646 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-31 04:50:16.325657 | orchestrator | Tuesday 31 March 2026 04:49:57 +0000 (0:00:00.450) 0:15:30.617 ********* 2026-03-31 04:50:16.325668 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 04:50:16.325679 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-31 04:50:16.325690 | orchestrator | 2026-03-31 04:50:16.325701 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-31 04:50:16.325712 | orchestrator | Tuesday 31 March 2026 04:49:58 +0000 (0:00:00.824) 0:15:31.441 ********* 2026-03-31 04:50:16.325722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:50:16.325733 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 04:50:16.325745 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 04:50:16.325755 | orchestrator | 2026-03-31 04:50:16.325783 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:50:16.325795 | orchestrator | Tuesday 31 March 2026 04:50:00 +0000 (0:00:02.044) 0:15:33.486 ********* 2026-03-31 04:50:16.325806 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-31 04:50:16.325817 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-31 04:50:16.325828 | orchestrator | ok: [testbed-node-3] 2026-03-31 
04:50:16.325839 | orchestrator | 2026-03-31 04:50:16.325850 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-31 04:50:16.325861 | orchestrator | Tuesday 31 March 2026 04:50:01 +0000 (0:00:01.082) 0:15:34.568 ********* 2026-03-31 04:50:16.325872 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.325883 | orchestrator | 2026-03-31 04:50:16.325894 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-31 04:50:16.325905 | orchestrator | Tuesday 31 March 2026 04:50:02 +0000 (0:00:00.241) 0:15:34.809 ********* 2026-03-31 04:50:16.325915 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.325926 | orchestrator | 2026-03-31 04:50:16.325937 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-31 04:50:16.325949 | orchestrator | Tuesday 31 March 2026 04:50:02 +0000 (0:00:00.144) 0:15:34.954 ********* 2026-03-31 04:50:16.325959 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.325970 | orchestrator | 2026-03-31 04:50:16.325981 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-31 04:50:16.325992 | orchestrator | Tuesday 31 March 2026 04:50:02 +0000 (0:00:00.141) 0:15:35.096 ********* 2026-03-31 04:50:16.326011 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-03-31 04:50:16.326083 | orchestrator | 2026-03-31 04:50:16.326095 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-31 04:50:16.326106 | orchestrator | Tuesday 31 March 2026 04:50:02 +0000 (0:00:00.219) 0:15:35.315 ********* 2026-03-31 04:50:16.326117 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.326128 | orchestrator | 2026-03-31 04:50:16.326138 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-31 
04:50:16.326149 | orchestrator | Tuesday 31 March 2026 04:50:03 +0000 (0:00:00.476) 0:15:35.792 ********* 2026-03-31 04:50:16.326160 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.326171 | orchestrator | 2026-03-31 04:50:16.326182 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-31 04:50:16.326193 | orchestrator | Tuesday 31 March 2026 04:50:05 +0000 (0:00:02.506) 0:15:38.299 ********* 2026-03-31 04:50:16.326223 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-03-31 04:50:16.326234 | orchestrator | 2026-03-31 04:50:16.326245 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-31 04:50:16.326262 | orchestrator | Tuesday 31 March 2026 04:50:05 +0000 (0:00:00.213) 0:15:38.512 ********* 2026-03-31 04:50:16.326273 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.326284 | orchestrator | 2026-03-31 04:50:16.326295 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-31 04:50:16.326306 | orchestrator | Tuesday 31 March 2026 04:50:07 +0000 (0:00:01.194) 0:15:39.707 ********* 2026-03-31 04:50:16.326317 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.326328 | orchestrator | 2026-03-31 04:50:16.326339 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-31 04:50:16.326350 | orchestrator | Tuesday 31 March 2026 04:50:07 +0000 (0:00:00.904) 0:15:40.612 ********* 2026-03-31 04:50:16.326361 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:50:16.326372 | orchestrator | 2026-03-31 04:50:16.326383 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-31 04:50:16.326394 | orchestrator | Tuesday 31 March 2026 04:50:09 +0000 (0:00:01.164) 0:15:41.776 ********* 2026-03-31 04:50:16.326405 | orchestrator | skipping: [testbed-node-3] 2026-03-31 
04:50:16.326416 | orchestrator | 2026-03-31 04:50:16.326427 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-31 04:50:16.326438 | orchestrator | Tuesday 31 March 2026 04:50:09 +0000 (0:00:00.130) 0:15:41.907 ********* 2026-03-31 04:50:16.326449 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.326460 | orchestrator | 2026-03-31 04:50:16.326471 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-31 04:50:16.326482 | orchestrator | Tuesday 31 March 2026 04:50:09 +0000 (0:00:00.134) 0:15:42.041 ********* 2026-03-31 04:50:16.326493 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-31 04:50:16.326504 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-31 04:50:16.326515 | orchestrator | 2026-03-31 04:50:16.326526 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-31 04:50:16.326537 | orchestrator | Tuesday 31 March 2026 04:50:10 +0000 (0:00:00.783) 0:15:42.825 ********* 2026-03-31 04:50:16.326548 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-31 04:50:16.326559 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-31 04:50:16.326570 | orchestrator | 2026-03-31 04:50:16.326581 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-31 04:50:16.326592 | orchestrator | Tuesday 31 March 2026 04:50:11 +0000 (0:00:01.799) 0:15:44.625 ********* 2026-03-31 04:50:16.326603 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-31 04:50:16.326615 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-31 04:50:16.326626 | orchestrator | 2026-03-31 04:50:16.326637 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-31 04:50:16.326648 | orchestrator | Tuesday 31 March 2026 04:50:15 +0000 (0:00:03.631) 0:15:48.256 ********* 2026-03-31 04:50:16.326666 | orchestrator | 
skipping: [testbed-node-3] 2026-03-31 04:50:16.326677 | orchestrator | 2026-03-31 04:50:16.326688 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-31 04:50:16.326699 | orchestrator | Tuesday 31 March 2026 04:50:15 +0000 (0:00:00.236) 0:15:48.493 ********* 2026-03-31 04:50:16.326710 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.326721 | orchestrator | 2026-03-31 04:50:16.326732 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-31 04:50:16.326743 | orchestrator | Tuesday 31 March 2026 04:50:16 +0000 (0:00:00.206) 0:15:48.699 ********* 2026-03-31 04:50:16.326754 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:16.326765 | orchestrator | 2026-03-31 04:50:16.326784 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-31 04:50:43.988109 | orchestrator | Tuesday 31 March 2026 04:50:16 +0000 (0:00:00.293) 0:15:48.992 ********* 2026-03-31 04:50:43.988299 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988321 | orchestrator | 2026-03-31 04:50:43.988335 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-31 04:50:43.988348 | orchestrator | Tuesday 31 March 2026 04:50:16 +0000 (0:00:00.127) 0:15:49.120 ********* 2026-03-31 04:50:43.988359 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988370 | orchestrator | 2026-03-31 04:50:43.988381 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-31 04:50:43.988393 | orchestrator | Tuesday 31 March 2026 04:50:16 +0000 (0:00:00.409) 0:15:49.529 ********* 2026-03-31 04:50:43.988404 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-31 04:50:43.988416 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-31 04:50:43.988427 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-31 04:50:43.988438 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-31 04:50:43.988449 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-03-31 04:50:43.988460 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (595 retries left). 2026-03-31 04:50:43.988472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:50:43.988484 | orchestrator | 2026-03-31 04:50:43.988495 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:50:43.988506 | orchestrator | Tuesday 31 March 2026 04:50:35 +0000 (0:00:18.881) 0:16:08.411 ********* 2026-03-31 04:50:43.988517 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988528 | orchestrator | 2026-03-31 04:50:43.988540 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:50:43.988551 | orchestrator | Tuesday 31 March 2026 04:50:35 +0000 (0:00:00.116) 0:16:08.527 ********* 2026-03-31 04:50:43.988562 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988573 | orchestrator | 2026-03-31 04:50:43.988585 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:50:43.988597 | orchestrator | Tuesday 31 March 2026 04:50:35 +0000 (0:00:00.142) 0:16:08.670 ********* 2026-03-31 04:50:43.988628 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988641 | orchestrator | 2026-03-31 04:50:43.988654 | orchestrator | RUNNING HANDLER 
[ceph-handler : Mdss handler] ********************************** 2026-03-31 04:50:43.988667 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.132) 0:16:08.803 ********* 2026-03-31 04:50:43.988680 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988693 | orchestrator | 2026-03-31 04:50:43.988706 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 04:50:43.988718 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.139) 0:16:08.942 ********* 2026-03-31 04:50:43.988752 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988765 | orchestrator | 2026-03-31 04:50:43.988778 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:50:43.988790 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.118) 0:16:09.060 ********* 2026-03-31 04:50:43.988803 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988815 | orchestrator | 2026-03-31 04:50:43.988828 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:50:43.988840 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.119) 0:16:09.180 ********* 2026-03-31 04:50:43.988853 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:50:43.988865 | orchestrator | 2026-03-31 04:50:43.988878 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-31 04:50:43.988891 | orchestrator | 2026-03-31 04:50:43.988904 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:50:43.988917 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.211) 0:16:09.392 ********* 2026-03-31 04:50:43.988930 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-31 04:50:43.988942 | orchestrator | 2026-03-31 04:50:43.988953 | orchestrator | TASK [ceph-facts 
: Check if it is atomic host] ********************************* 2026-03-31 04:50:43.988964 | orchestrator | Tuesday 31 March 2026 04:50:36 +0000 (0:00:00.228) 0:16:09.621 ********* 2026-03-31 04:50:43.988976 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.988987 | orchestrator | 2026-03-31 04:50:43.988998 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:50:43.989009 | orchestrator | Tuesday 31 March 2026 04:50:37 +0000 (0:00:00.699) 0:16:10.320 ********* 2026-03-31 04:50:43.989020 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989031 | orchestrator | 2026-03-31 04:50:43.989042 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:50:43.989053 | orchestrator | Tuesday 31 March 2026 04:50:37 +0000 (0:00:00.126) 0:16:10.446 ********* 2026-03-31 04:50:43.989064 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989075 | orchestrator | 2026-03-31 04:50:43.989086 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:50:43.989103 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.450) 0:16:10.897 ********* 2026-03-31 04:50:43.989121 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989140 | orchestrator | 2026-03-31 04:50:43.989160 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:50:43.989174 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.156) 0:16:11.054 ********* 2026-03-31 04:50:43.989185 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989196 | orchestrator | 2026-03-31 04:50:43.989207 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:50:43.989267 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.162) 0:16:11.216 ********* 2026-03-31 04:50:43.989290 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 04:50:43.989308 | orchestrator | 2026-03-31 04:50:43.989325 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:50:43.989337 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.158) 0:16:11.374 ********* 2026-03-31 04:50:43.989348 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:43.989359 | orchestrator | 2026-03-31 04:50:43.989370 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:50:43.989381 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.142) 0:16:11.517 ********* 2026-03-31 04:50:43.989392 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989403 | orchestrator | 2026-03-31 04:50:43.989413 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:50:43.989424 | orchestrator | Tuesday 31 March 2026 04:50:38 +0000 (0:00:00.136) 0:16:11.653 ********* 2026-03-31 04:50:43.989435 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:50:43.989457 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:50:43.989468 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:50:43.989479 | orchestrator | 2026-03-31 04:50:43.989490 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:50:43.989501 | orchestrator | Tuesday 31 March 2026 04:50:39 +0000 (0:00:00.971) 0:16:12.625 ********* 2026-03-31 04:50:43.989512 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:43.989522 | orchestrator | 2026-03-31 04:50:43.989533 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:50:43.989544 | orchestrator | Tuesday 31 March 2026 04:50:40 +0000 (0:00:00.303) 
0:16:12.928 ********* 2026-03-31 04:50:43.989555 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:50:43.989566 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:50:43.989577 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:50:43.989587 | orchestrator | 2026-03-31 04:50:43.989598 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:50:43.989609 | orchestrator | Tuesday 31 March 2026 04:50:42 +0000 (0:00:02.161) 0:16:15.090 ********* 2026-03-31 04:50:43.989620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 04:50:43.989632 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 04:50:43.989649 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 04:50:43.989661 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:43.989672 | orchestrator | 2026-03-31 04:50:43.989682 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:50:43.989693 | orchestrator | Tuesday 31 March 2026 04:50:42 +0000 (0:00:00.431) 0:16:15.522 ********* 2026-03-31 04:50:43.989725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:50:43.989740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:50:43.989752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:50:43.989763 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:43.989774 | orchestrator | 2026-03-31 04:50:43.989785 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:50:43.989796 | orchestrator | Tuesday 31 March 2026 04:50:43 +0000 (0:00:00.964) 0:16:16.486 ********* 2026-03-31 04:50:43.989809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:43.989824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:43.989845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.313745 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.313846 | orchestrator | 
2026-03-31 04:50:48.313862 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:50:48.313876 | orchestrator | Tuesday 31 March 2026 04:50:43 +0000 (0:00:00.169) 0:16:16.655 ********* 2026-03-31 04:50:48.313892 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:50:40.773994', 'end': '2026-03-31 04:50:40.818194', 'delta': '0:00:00.044200', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:50:48.313925 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:50:41.610651', 'end': '2026-03-31 04:50:41.668927', 'delta': '0:00:00.058276', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:50:48.313938 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', 
'--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:50:42.215852', 'end': '2026-03-31 04:50:42.264605', 'delta': '0:00:00.048753', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:50:48.313950 | orchestrator | 2026-03-31 04:50:48.313961 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:50:48.313972 | orchestrator | Tuesday 31 March 2026 04:50:44 +0000 (0:00:00.510) 0:16:17.166 ********* 2026-03-31 04:50:48.313983 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.313995 | orchestrator | 2026-03-31 04:50:48.314006 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:50:48.314089 | orchestrator | Tuesday 31 March 2026 04:50:44 +0000 (0:00:00.261) 0:16:17.427 ********* 2026-03-31 04:50:48.314102 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314114 | orchestrator | 2026-03-31 04:50:48.314125 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:50:48.314136 | orchestrator | Tuesday 31 March 2026 04:50:45 +0000 (0:00:00.275) 0:16:17.702 ********* 2026-03-31 04:50:48.314148 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.314159 | orchestrator | 2026-03-31 04:50:48.314170 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:50:48.314205 | orchestrator | Tuesday 31 March 2026 04:50:45 +0000 (0:00:00.145) 0:16:17.848 ********* 2026-03-31 04:50:48.314217 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:50:48.314228 | orchestrator | 2026-03-31 04:50:48.314265 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:50:48.314276 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:01.003) 0:16:18.852 ********* 2026-03-31 04:50:48.314289 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.314302 | orchestrator | 2026-03-31 04:50:48.314314 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:50:48.314328 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:00.144) 0:16:18.997 ********* 2026-03-31 04:50:48.314340 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314353 | orchestrator | 2026-03-31 04:50:48.314365 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:50:48.314378 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:00.130) 0:16:19.127 ********* 2026-03-31 04:50:48.314390 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314403 | orchestrator | 2026-03-31 04:50:48.314416 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:50:48.314429 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:00.256) 0:16:19.384 ********* 2026-03-31 04:50:48.314441 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314454 | orchestrator | 2026-03-31 04:50:48.314485 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:50:48.314497 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:00.128) 0:16:19.513 ********* 2026-03-31 04:50:48.314509 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314520 | orchestrator | 2026-03-31 04:50:48.314531 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved 
symlinks] ************** 2026-03-31 04:50:48.314542 | orchestrator | Tuesday 31 March 2026 04:50:46 +0000 (0:00:00.126) 0:16:19.639 ********* 2026-03-31 04:50:48.314554 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.314565 | orchestrator | 2026-03-31 04:50:48.314576 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:50:48.314587 | orchestrator | Tuesday 31 March 2026 04:50:47 +0000 (0:00:00.164) 0:16:19.803 ********* 2026-03-31 04:50:48.314598 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314609 | orchestrator | 2026-03-31 04:50:48.314620 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:50:48.314631 | orchestrator | Tuesday 31 March 2026 04:50:47 +0000 (0:00:00.152) 0:16:19.955 ********* 2026-03-31 04:50:48.314642 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.314653 | orchestrator | 2026-03-31 04:50:48.314664 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:50:48.314675 | orchestrator | Tuesday 31 March 2026 04:50:47 +0000 (0:00:00.178) 0:16:20.134 ********* 2026-03-31 04:50:48.314686 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.314697 | orchestrator | 2026-03-31 04:50:48.314708 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:50:48.314720 | orchestrator | Tuesday 31 March 2026 04:50:47 +0000 (0:00:00.455) 0:16:20.589 ********* 2026-03-31 04:50:48.314731 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:48.314742 | orchestrator | 2026-03-31 04:50:48.314753 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:50:48.314764 | orchestrator | Tuesday 31 March 2026 04:50:48 +0000 (0:00:00.183) 0:16:20.773 ********* 2026-03-31 04:50:48.314783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.314807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}})  2026-03-31 04:50:48.314820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:50:48.314833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}})  2026-03-31 04:50:48.314854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.648904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:50:48.649114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649215 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}})  2026-03-31 04:50:48.649270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}})  2026-03-31 04:50:48.649316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:50:48.649379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:50:48.649418 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:48.649439 | orchestrator | 2026-03-31 04:50:48.649459 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:50:48.649480 | orchestrator | Tuesday 31 March 2026 04:50:48 +0000 (0:00:00.341) 0:16:21.115 ********* 2026-03-31 04:50:48.649516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824800 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.824974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:48.825008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:50:53.380877 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:53.380891 | orchestrator | 2026-03-31 04:50:53.380903 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:50:53.380916 | orchestrator | Tuesday 31 March 2026 04:50:48 +0000 (0:00:00.384) 0:16:21.500 ********* 2026-03-31 04:50:53.380927 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:53.380938 | orchestrator | 2026-03-31 04:50:53.380949 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:50:53.380961 | orchestrator | Tuesday 31 March 2026 04:50:49 +0000 (0:00:00.509) 0:16:22.009 ********* 2026-03-31 04:50:53.380971 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:53.380982 | orchestrator | 2026-03-31 04:50:53.380993 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:50:53.381004 | orchestrator | Tuesday 31 March 2026 04:50:49 +0000 (0:00:00.156) 0:16:22.165 ********* 2026-03-31 04:50:53.381015 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:50:53.381025 | orchestrator | 2026-03-31 04:50:53.381036 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:50:53.381047 | orchestrator | Tuesday 31 March 2026 04:50:49 +0000 (0:00:00.490) 0:16:22.656 ********* 2026-03-31 04:50:53.381058 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:53.381069 | orchestrator | 2026-03-31 04:50:53.381079 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:50:53.381090 | orchestrator | Tuesday 31 March 2026 04:50:50 +0000 (0:00:00.146) 0:16:22.803 ********* 2026-03-31 04:50:53.381101 | orchestrator | skipping: [testbed-node-4] 2026-03-31 
04:50:53.381112 | orchestrator | 2026-03-31 04:50:53.381123 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:50:53.381134 | orchestrator | Tuesday 31 March 2026 04:50:50 +0000 (0:00:00.244) 0:16:23.047 ********* 2026-03-31 04:50:53.381144 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:53.381155 | orchestrator | 2026-03-31 04:50:53.381166 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:50:53.381177 | orchestrator | Tuesday 31 March 2026 04:50:50 +0000 (0:00:00.154) 0:16:23.202 ********* 2026-03-31 04:50:53.381188 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-31 04:50:53.381199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-31 04:50:53.381210 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-31 04:50:53.381221 | orchestrator | 2026-03-31 04:50:53.381232 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:50:53.381268 | orchestrator | Tuesday 31 March 2026 04:50:51 +0000 (0:00:01.012) 0:16:24.214 ********* 2026-03-31 04:50:53.381280 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 04:50:53.381300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 04:50:53.381310 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 04:50:53.381322 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:50:53.381332 | orchestrator | 2026-03-31 04:50:53.381343 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:50:53.381354 | orchestrator | Tuesday 31 March 2026 04:50:51 +0000 (0:00:00.168) 0:16:24.383 ********* 2026-03-31 04:50:53.381365 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-31 04:50:53.381377 | 
orchestrator |
2026-03-31 04:50:53.381388 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:50:53.381400 | orchestrator | Tuesday 31 March 2026 04:50:52 +0000 (0:00:00.529) 0:16:24.913 *********
2026-03-31 04:50:53.381412 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:50:53.381423 | orchestrator |
2026-03-31 04:50:53.381434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:50:53.381445 | orchestrator | Tuesday 31 March 2026 04:50:52 +0000 (0:00:00.150) 0:16:25.063 *********
2026-03-31 04:50:53.381456 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:50:53.381466 | orchestrator |
2026-03-31 04:50:53.381477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:50:53.381488 | orchestrator | Tuesday 31 March 2026 04:50:52 +0000 (0:00:00.149) 0:16:25.212 *********
2026-03-31 04:50:53.381499 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:50:53.381510 | orchestrator |
2026-03-31 04:50:53.381521 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:50:53.381531 | orchestrator | Tuesday 31 March 2026 04:50:52 +0000 (0:00:00.271) 0:16:25.406 *********
2026-03-31 04:50:53.381542 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:50:53.381553 | orchestrator |
2026-03-31 04:50:53.381564 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:50:53.381575 | orchestrator | Tuesday 31 March 2026 04:50:52 +0000 (0:00:00.377) 0:16:25.677 *********
2026-03-31 04:50:53.381593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:51:07.751822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:51:07.751941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:51:07.751958 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.751970 | orchestrator |
2026-03-31 04:51:07.751984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:51:07.751997 | orchestrator | Tuesday 31 March 2026 04:50:53 +0000 (0:00:00.377) 0:16:26.055 *********
2026-03-31 04:51:07.752009 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:51:07.752036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:51:07.752049 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:51:07.752060 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.752071 | orchestrator |
2026-03-31 04:51:07.752083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:51:07.752094 | orchestrator | Tuesday 31 March 2026 04:50:53 +0000 (0:00:00.416) 0:16:26.472 *********
2026-03-31 04:51:07.752105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:51:07.752117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:51:07.752129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:51:07.752140 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.752151 | orchestrator |
2026-03-31 04:51:07.752162 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:51:07.752174 | orchestrator | Tuesday 31 March 2026 04:50:54 +0000 (0:00:00.418) 0:16:26.890 *********
2026-03-31 04:51:07.752185 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.752218 | orchestrator |
2026-03-31 04:51:07.752230 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:51:07.752241 | orchestrator | Tuesday 31 March 2026 04:50:54 +0000 (0:00:00.157) 0:16:27.048 *********
2026-03-31 04:51:07.752294 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-31 04:51:07.752306 | orchestrator |
2026-03-31 04:51:07.752317 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:51:07.752329 | orchestrator | Tuesday 31 March 2026 04:50:54 +0000 (0:00:00.344) 0:16:27.392 *********
2026-03-31 04:51:07.752340 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:51:07.752352 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:51:07.752365 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:51:07.752378 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:51:07.752391 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:51:07.752404 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:51:07.752416 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:51:07.752429 | orchestrator |
2026-03-31 04:51:07.752442 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:51:07.752455 | orchestrator | Tuesday 31 March 2026 04:50:55 +0000 (0:00:01.160) 0:16:28.553 *********
2026-03-31 04:51:07.752468 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:51:07.752481 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:51:07.752494 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:51:07.752507 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:51:07.752520 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:51:07.752533 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:51:07.752545 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:51:07.752558 | orchestrator |
2026-03-31 04:51:07.752572 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-31 04:51:07.752584 | orchestrator | Tuesday 31 March 2026 04:50:57 +0000 (0:00:01.721) 0:16:30.274 *********
2026-03-31 04:51:07.752597 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.752609 | orchestrator |
2026-03-31 04:51:07.752629 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-31 04:51:07.752648 | orchestrator | Tuesday 31 March 2026 04:50:58 +0000 (0:00:00.756) 0:16:31.031 *********
2026-03-31 04:51:07.752679 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.752700 | orchestrator |
2026-03-31 04:51:07.752719 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-31 04:51:07.752738 | orchestrator | Tuesday 31 March 2026 04:50:58 +0000 (0:00:00.139) 0:16:31.170 *********
2026-03-31 04:51:07.752756 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.752776 | orchestrator |
2026-03-31 04:51:07.752794 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-31 04:51:07.752814 | orchestrator | Tuesday 31 March 2026 04:50:58 +0000 (0:00:00.252) 0:16:31.423 *********
2026-03-31 04:51:07.752834 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-31 04:51:07.752855 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-31 04:51:07.752875 | orchestrator |
2026-03-31 04:51:07.752893 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:51:07.752905 | orchestrator | Tuesday 31 March 2026 04:51:01 +0000 (0:00:03.175) 0:16:34.599 *********
2026-03-31 04:51:07.752916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-31 04:51:07.752940 | orchestrator |
2026-03-31 04:51:07.752952 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:51:07.752984 | orchestrator | Tuesday 31 March 2026 04:51:02 +0000 (0:00:00.199) 0:16:34.799 *********
2026-03-31 04:51:07.752996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-31 04:51:07.753007 | orchestrator |
2026-03-31 04:51:07.753018 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:51:07.753029 | orchestrator | Tuesday 31 March 2026 04:51:02 +0000 (0:00:00.207) 0:16:35.006 *********
2026-03-31 04:51:07.753040 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753051 | orchestrator |
2026-03-31 04:51:07.753070 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:51:07.753082 | orchestrator | Tuesday 31 March 2026 04:51:02 +0000 (0:00:00.135) 0:16:35.142 *********
2026-03-31 04:51:07.753093 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753104 | orchestrator |
2026-03-31 04:51:07.753115 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:51:07.753126 | orchestrator | Tuesday 31 March 2026 04:51:02 +0000 (0:00:00.528) 0:16:35.670 *********
2026-03-31 04:51:07.753137 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753148 | orchestrator |
2026-03-31 04:51:07.753158 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:51:07.753169 | orchestrator | Tuesday 31 March 2026 04:51:03 +0000 (0:00:00.536) 0:16:36.206 *********
2026-03-31 04:51:07.753180 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753191 | orchestrator |
2026-03-31 04:51:07.753202 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:51:07.753213 | orchestrator | Tuesday 31 March 2026 04:51:04 +0000 (0:00:00.502) 0:16:36.709 *********
2026-03-31 04:51:07.753224 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753235 | orchestrator |
2026-03-31 04:51:07.753246 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:51:07.753283 | orchestrator | Tuesday 31 March 2026 04:51:04 +0000 (0:00:00.132) 0:16:36.841 *********
2026-03-31 04:51:07.753294 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753305 | orchestrator |
2026-03-31 04:51:07.753316 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:51:07.753327 | orchestrator | Tuesday 31 March 2026 04:51:04 +0000 (0:00:00.420) 0:16:37.261 *********
2026-03-31 04:51:07.753338 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753350 | orchestrator |
2026-03-31 04:51:07.753361 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:51:07.753371 | orchestrator | Tuesday 31 March 2026 04:51:04 +0000 (0:00:00.138) 0:16:37.400 *********
2026-03-31 04:51:07.753382 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753393 | orchestrator |
2026-03-31 04:51:07.753404 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:51:07.753415 | orchestrator | Tuesday 31 March 2026 04:51:05 +0000 (0:00:00.523) 0:16:37.923 *********
2026-03-31 04:51:07.753426 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753437 | orchestrator |
2026-03-31 04:51:07.753448 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:51:07.753458 | orchestrator | Tuesday 31 March 2026 04:51:05 +0000 (0:00:00.533) 0:16:38.457 *********
2026-03-31 04:51:07.753469 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753480 | orchestrator |
2026-03-31 04:51:07.753491 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:51:07.753502 | orchestrator | Tuesday 31 March 2026 04:51:05 +0000 (0:00:00.133) 0:16:38.591 *********
2026-03-31 04:51:07.753513 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753523 | orchestrator |
2026-03-31 04:51:07.753534 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:51:07.753545 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.140) 0:16:38.731 *********
2026-03-31 04:51:07.753564 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753575 | orchestrator |
2026-03-31 04:51:07.753586 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:51:07.753596 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.168) 0:16:38.900 *********
2026-03-31 04:51:07.753607 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753618 | orchestrator |
2026-03-31 04:51:07.753629 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:51:07.753640 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.162) 0:16:39.062 *********
2026-03-31 04:51:07.753651 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753662 | orchestrator |
2026-03-31 04:51:07.753673 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:51:07.753683 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.163) 0:16:39.225 *********
2026-03-31 04:51:07.753694 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753705 | orchestrator |
2026-03-31 04:51:07.753716 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:51:07.753727 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.136) 0:16:39.362 *********
2026-03-31 04:51:07.753738 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753749 | orchestrator |
2026-03-31 04:51:07.753760 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:51:07.753770 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.125) 0:16:39.488 *********
2026-03-31 04:51:07.753781 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:07.753792 | orchestrator |
2026-03-31 04:51:07.753803 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:51:07.753819 | orchestrator | Tuesday 31 March 2026 04:51:06 +0000 (0:00:00.135) 0:16:39.624 *********
2026-03-31 04:51:07.753838 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753858 | orchestrator |
2026-03-31 04:51:07.753878 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:51:07.753898 | orchestrator | Tuesday 31 March 2026 04:51:07 +0000 (0:00:00.147) 0:16:39.771 *********
2026-03-31 04:51:07.753977 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:07.753989 | orchestrator |
2026-03-31 04:51:07.754001 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:51:07.754011 | orchestrator | Tuesday 31 March 2026 04:51:07 +0000 (0:00:00.525) 0:16:40.296 *********
2026-03-31 04:51:07.754102 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.969682 | orchestrator |
2026-03-31 04:51:18.969817 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:51:18.969835 | orchestrator | Tuesday 31 March 2026 04:51:07 +0000 (0:00:00.124) 0:16:40.421 *********
2026-03-31 04:51:18.969848 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.969860 | orchestrator |
2026-03-31 04:51:18.969871 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:51:18.969883 | orchestrator | Tuesday 31 March 2026 04:51:07 +0000 (0:00:00.118) 0:16:40.539 *********
2026-03-31 04:51:18.969894 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.969905 | orchestrator |
2026-03-31 04:51:18.969932 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:51:18.969944 | orchestrator | Tuesday 31 March 2026 04:51:07 +0000 (0:00:00.120) 0:16:40.660 *********
2026-03-31 04:51:18.969968 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.969979 | orchestrator |
2026-03-31 04:51:18.969990 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:51:18.970001 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.138) 0:16:40.798 *********
2026-03-31 04:51:18.970012 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970083 | orchestrator |
2026-03-31 04:51:18.970095 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:51:18.970106 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.127) 0:16:40.926 *********
2026-03-31 04:51:18.970138 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970150 | orchestrator |
2026-03-31 04:51:18.970161 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:51:18.970182 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.138) 0:16:41.064 *********
2026-03-31 04:51:18.970193 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970204 | orchestrator |
2026-03-31 04:51:18.970217 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:51:18.970230 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.121) 0:16:41.186 *********
2026-03-31 04:51:18.970242 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970255 | orchestrator |
2026-03-31 04:51:18.970318 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:51:18.970340 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.129) 0:16:41.315 *********
2026-03-31 04:51:18.970358 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970374 | orchestrator |
2026-03-31 04:51:18.970388 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:51:18.970400 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.128) 0:16:41.444 *********
2026-03-31 04:51:18.970413 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970425 | orchestrator |
2026-03-31 04:51:18.970438 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:51:18.970451 | orchestrator | Tuesday 31 March 2026 04:51:08 +0000 (0:00:00.131) 0:16:41.575 *********
2026-03-31 04:51:18.970463 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970475 | orchestrator |
2026-03-31 04:51:18.970487 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:51:18.970500 | orchestrator | Tuesday 31 March 2026 04:51:09 +0000 (0:00:00.118) 0:16:41.693 *********
2026-03-31 04:51:18.970512 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970525 | orchestrator |
2026-03-31 04:51:18.970537 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:51:18.970549 | orchestrator | Tuesday 31 March 2026 04:51:09 +0000 (0:00:00.480) 0:16:42.174 *********
2026-03-31 04:51:18.970562 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.970575 | orchestrator |
2026-03-31 04:51:18.970588 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:51:18.970599 | orchestrator | Tuesday 31 March 2026 04:51:10 +0000 (0:00:00.904) 0:16:43.078 *********
2026-03-31 04:51:18.970610 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.970621 | orchestrator |
2026-03-31 04:51:18.970632 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:51:18.970643 | orchestrator | Tuesday 31 March 2026 04:51:11 +0000 (0:00:01.254) 0:16:44.332 *********
2026-03-31 04:51:18.970654 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-03-31 04:51:18.970666 | orchestrator |
2026-03-31 04:51:18.970677 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 04:51:18.970688 | orchestrator | Tuesday 31 March 2026 04:51:11 +0000 (0:00:00.213) 0:16:44.546 *********
2026-03-31 04:51:18.970699 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970710 | orchestrator |
2026-03-31 04:51:18.970720 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 04:51:18.970731 | orchestrator | Tuesday 31 March 2026 04:51:11 +0000 (0:00:00.126) 0:16:44.673 *********
2026-03-31 04:51:18.970742 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970753 | orchestrator |
2026-03-31 04:51:18.970764 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 04:51:18.970774 | orchestrator | Tuesday 31 March 2026 04:51:12 +0000 (0:00:00.148) 0:16:44.821 *********
2026-03-31 04:51:18.970785 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 04:51:18.970796 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 04:51:18.970816 | orchestrator |
2026-03-31 04:51:18.970827 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 04:51:18.970838 | orchestrator | Tuesday 31 March 2026 04:51:12 +0000 (0:00:00.784) 0:16:45.606 *********
2026-03-31 04:51:18.970848 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.970859 | orchestrator |
2026-03-31 04:51:18.970870 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 04:51:18.970881 | orchestrator | Tuesday 31 March 2026 04:51:13 +0000 (0:00:00.473) 0:16:46.080 *********
2026-03-31 04:51:18.970892 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970903 | orchestrator |
2026-03-31 04:51:18.970933 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 04:51:18.970944 | orchestrator | Tuesday 31 March 2026 04:51:13 +0000 (0:00:00.168) 0:16:46.248 *********
2026-03-31 04:51:18.970955 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.970966 | orchestrator |
2026-03-31 04:51:18.970977 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:51:18.970988 | orchestrator | Tuesday 31 March 2026 04:51:13 +0000 (0:00:00.147) 0:16:46.396 *********
2026-03-31 04:51:18.970998 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971009 | orchestrator |
2026-03-31 04:51:18.971020 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:51:18.971037 | orchestrator | Tuesday 31 March 2026 04:51:13 +0000 (0:00:00.127) 0:16:46.524 *********
2026-03-31 04:51:18.971049 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-31 04:51:18.971060 | orchestrator |
2026-03-31 04:51:18.971071 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 04:51:18.971081 | orchestrator | Tuesday 31 March 2026 04:51:14 +0000 (0:00:00.478) 0:16:47.003 *********
2026-03-31 04:51:18.971092 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.971103 | orchestrator |
2026-03-31 04:51:18.971114 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 04:51:18.971125 | orchestrator | Tuesday 31 March 2026 04:51:14 +0000 (0:00:00.678) 0:16:47.681 *********
2026-03-31 04:51:18.971136 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 04:51:18.971146 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 04:51:18.971157 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 04:51:18.971168 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971179 | orchestrator |
2026-03-31 04:51:18.971190 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 04:51:18.971200 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.145) 0:16:47.826 *********
2026-03-31 04:51:18.971211 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971222 | orchestrator |
2026-03-31 04:51:18.971232 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 04:51:18.971243 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.129) 0:16:47.956 *********
2026-03-31 04:51:18.971254 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971287 | orchestrator |
2026-03-31 04:51:18.971299 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 04:51:18.971310 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.178) 0:16:48.134 *********
2026-03-31 04:51:18.971321 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971332 | orchestrator |
2026-03-31 04:51:18.971343 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 04:51:18.971354 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.183) 0:16:48.317 *********
2026-03-31 04:51:18.971364 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971375 | orchestrator |
2026-03-31 04:51:18.971386 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 04:51:18.971397 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.156) 0:16:48.474 *********
2026-03-31 04:51:18.971417 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971428 | orchestrator |
2026-03-31 04:51:18.971439 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:51:18.971450 | orchestrator | Tuesday 31 March 2026 04:51:15 +0000 (0:00:00.152) 0:16:48.626 *********
2026-03-31 04:51:18.971460 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.971471 | orchestrator |
2026-03-31 04:51:18.971482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:51:18.971493 | orchestrator | Tuesday 31 March 2026 04:51:17 +0000 (0:00:01.501) 0:16:50.128 *********
2026-03-31 04:51:18.971504 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:18.971515 | orchestrator |
2026-03-31 04:51:18.971525 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:51:18.971536 | orchestrator | Tuesday 31 March 2026 04:51:17 +0000 (0:00:00.142) 0:16:50.270 *********
2026-03-31 04:51:18.971547 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-31 04:51:18.971558 | orchestrator |
2026-03-31 04:51:18.971569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 04:51:18.971580 | orchestrator | Tuesday 31 March 2026 04:51:17 +0000 (0:00:00.229) 0:16:50.500 *********
2026-03-31 04:51:18.971590 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971601 | orchestrator |
2026-03-31 04:51:18.971612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 04:51:18.971623 | orchestrator | Tuesday 31 March 2026 04:51:17 +0000 (0:00:00.144) 0:16:50.644 *********
2026-03-31 04:51:18.971634 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971644 | orchestrator |
2026-03-31 04:51:18.971655 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 04:51:18.971666 | orchestrator | Tuesday 31 March 2026 04:51:18 +0000 (0:00:00.426) 0:16:51.071 *********
2026-03-31 04:51:18.971677 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971688 | orchestrator |
2026-03-31 04:51:18.971699 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 04:51:18.971710 | orchestrator | Tuesday 31 March 2026 04:51:18 +0000 (0:00:00.150) 0:16:51.222 *********
2026-03-31 04:51:18.971720 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971731 | orchestrator |
2026-03-31 04:51:18.971742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 04:51:18.971753 | orchestrator | Tuesday 31 March 2026 04:51:18 +0000 (0:00:00.153) 0:16:51.375 *********
2026-03-31 04:51:18.971764 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:18.971775 | orchestrator |
2026-03-31 04:51:18.971785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 04:51:18.971796 | orchestrator | Tuesday 31 March 2026 04:51:18 +0000 (0:00:00.135) 0:16:51.510 *********
2026-03-31 04:51:18.971814 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.218969 | orchestrator |
2026-03-31 04:51:41.219085 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 04:51:41.219103 | orchestrator | Tuesday 31 March 2026 04:51:18 +0000 (0:00:00.129) 0:16:51.640 *********
2026-03-31 04:51:41.219115 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219127 | orchestrator |
2026-03-31 04:51:41.219139 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 04:51:41.219150 | orchestrator | Tuesday 31 March 2026 04:51:19 +0000 (0:00:00.148) 0:16:51.789 *********
2026-03-31 04:51:41.219161 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219172 | orchestrator |
2026-03-31 04:51:41.219199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 04:51:41.219211 | orchestrator | Tuesday 31 March 2026 04:51:19 +0000 (0:00:00.150) 0:16:51.940 *********
2026-03-31 04:51:41.219222 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:41.219234 | orchestrator |
2026-03-31 04:51:41.219247 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:51:41.219339 | orchestrator | Tuesday 31 March 2026 04:51:19 +0000 (0:00:00.233) 0:16:52.173 *********
2026-03-31 04:51:41.219361 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-31 04:51:41.219379 | orchestrator |
2026-03-31 04:51:41.219391 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 04:51:41.219403 | orchestrator | Tuesday 31 March 2026 04:51:19 +0000 (0:00:00.191) 0:16:52.365 *********
2026-03-31 04:51:41.219414 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-31 04:51:41.219425 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-31 04:51:41.219436 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-31 04:51:41.219447 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-31 04:51:41.219458 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-31 04:51:41.219469 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-31 04:51:41.219479 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-31 04:51:41.219490 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-31 04:51:41.219504 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 04:51:41.219516 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 04:51:41.219529 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 04:51:41.219541 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 04:51:41.219554 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 04:51:41.219566 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 04:51:41.219578 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-31 04:51:41.219590 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-31 04:51:41.219603 | orchestrator |
2026-03-31 04:51:41.219615 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:51:41.219628 | orchestrator | Tuesday 31 March 2026 04:51:25 +0000 (0:00:05.526) 0:16:57.891 *********
2026-03-31 04:51:41.219641 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-31 04:51:41.219653 | orchestrator |
2026-03-31 04:51:41.219665 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-31 04:51:41.219678 | orchestrator | Tuesday 31 March 2026 04:51:25 +0000 (0:00:00.509) 0:16:58.400 *********
2026-03-31 04:51:41.219691 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-31 04:51:41.219704 | orchestrator |
2026-03-31 04:51:41.219717 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-31 04:51:41.219730 | orchestrator | Tuesday 31 March 2026 04:51:26 +0000 (0:00:00.514) 0:16:58.915 *********
2026-03-31 04:51:41.219743 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-31 04:51:41.219755 | orchestrator |
2026-03-31 04:51:41.219767 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:51:41.219780 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.958) 0:16:59.873 *********
2026-03-31 04:51:41.219793 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219805 | orchestrator |
2026-03-31 04:51:41.219818 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:51:41.219830 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.162) 0:17:00.036 *********
2026-03-31 04:51:41.219843 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219854 | orchestrator |
2026-03-31 04:51:41.219866 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:51:41.219877 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.144) 0:17:00.180 *********
2026-03-31 04:51:41.219900 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219912 | orchestrator |
2026-03-31 04:51:41.219923 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:51:41.219934 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.130) 0:17:00.311 *********
2026-03-31 04:51:41.219944 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219955 | orchestrator |
2026-03-31 04:51:41.219966 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:51:41.219977 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.128) 0:17:00.439 *********
2026-03-31 04:51:41.219988 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.219999 | orchestrator |
2026-03-31 04:51:41.220009 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:51:41.220021 | orchestrator | Tuesday 31 March 2026 04:51:27 +0000 (0:00:00.148) 0:17:00.578 *********
2026-03-31 04:51:41.220049 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.220061 | orchestrator |
2026-03-31 04:51:41.220072 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:51:41.220083 | orchestrator | Tuesday 31 March 2026 04:51:28 +0000 (0:00:00.148) 0:17:00.726 *********
2026-03-31 04:51:41.220094 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.220105 | orchestrator |
2026-03-31 04:51:41.220116 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:51:41.220133 | orchestrator | Tuesday 31 March 2026 04:51:28 +0000 (0:00:00.130) 0:17:00.857 *********
2026-03-31 04:51:41.220145 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.220156 | orchestrator |
2026-03-31 04:51:41.220167 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:51:41.220178 | orchestrator | Tuesday 31 March 2026 04:51:28 +0000 (0:00:00.139) 0:17:00.996 *********
2026-03-31 04:51:41.220188 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.220199 | orchestrator |
2026-03-31 04:51:41.220210 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:51:41.220221 | orchestrator | Tuesday 31 March 2026 04:51:28 +0000 (0:00:00.141) 0:17:01.138 *********
2026-03-31 04:51:41.220232 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:51:41.220243 | orchestrator |
2026-03-31 04:51:41.220254 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:51:41.220264 | orchestrator | Tuesday 31 March 2026 04:51:28 +0000 (0:00:00.127) 0:17:01.265 *********
2026-03-31 04:51:41.220275 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:51:41.220304 | orchestrator |
2026-03-31 04:51:41.220316 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:51:41.220328 | orchestrator | Tuesday 31 March 2026 04:51:29 +0000 (0:00:00.534) 0:17:01.799 *********
2026-03-31 04:51:41.220339 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-31 04:51:41.220350 | orchestrator |
2026-03-31 04:51:41.220361 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:51:41.220372 | orchestrator | Tuesday 31 March 2026 04:51:32 +0000 (0:00:03.722) 0:17:05.522 *********
2026-03-31 04:51:41.220383 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-31 04:51:41.220395 | orchestrator |
2026-03-31 04:51:41.220406 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:51:41.220417 | orchestrator | Tuesday 31 March 2026 04:51:33 +0000 (0:00:00.204) 0:17:05.727 *********
2026-03-31 04:51:41.220430 |
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-31 04:51:41.220452 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-31 04:51:41.220464 | orchestrator | 2026-03-31 04:51:41.220475 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:51:41.220486 | orchestrator | Tuesday 31 March 2026 04:51:39 +0000 (0:00:06.699) 0:17:12.427 ********* 2026-03-31 04:51:41.220497 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:51:41.220508 | orchestrator | 2026-03-31 04:51:41.220519 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:51:41.220530 | orchestrator | Tuesday 31 March 2026 04:51:39 +0000 (0:00:00.145) 0:17:12.572 ********* 2026-03-31 04:51:41.220541 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:51:41.220551 | orchestrator | 2026-03-31 04:51:41.220563 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:51:41.220574 | orchestrator | Tuesday 31 March 2026 04:51:40 +0000 (0:00:00.141) 0:17:12.713 ********* 2026-03-31 04:51:41.220584 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:51:41.220595 | orchestrator | 2026-03-31 04:51:41.220606 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-31 04:51:41.220617 | orchestrator | Tuesday 31 March 2026 04:51:40 +0000 (0:00:00.158) 0:17:12.871 ********* 2026-03-31 04:51:41.220628 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:51:41.220639 | orchestrator | 2026-03-31 04:51:41.220650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:51:41.220661 | orchestrator | Tuesday 31 March 2026 04:51:40 +0000 (0:00:00.159) 0:17:13.031 ********* 2026-03-31 04:51:41.220671 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:51:41.220682 | orchestrator | 2026-03-31 04:51:41.220693 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:51:41.220704 | orchestrator | Tuesday 31 March 2026 04:51:40 +0000 (0:00:00.150) 0:17:13.181 ********* 2026-03-31 04:51:41.220715 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:51:41.220726 | orchestrator | 2026-03-31 04:51:41.220737 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:51:41.220748 | orchestrator | Tuesday 31 March 2026 04:51:40 +0000 (0:00:00.274) 0:17:13.455 ********* 2026-03-31 04:51:41.220759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:51:41.220770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:51:41.220788 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:52:03.098530 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.098644 | orchestrator | 2026-03-31 04:52:03.098662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:52:03.098683 | orchestrator | Tuesday 31 March 2026 04:51:41 +0000 (0:00:00.432) 0:17:13.888 ********* 2026-03-31 04:52:03.098704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:52:03.098723 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:52:03.098741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:52:03.098780 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.098800 | orchestrator | 2026-03-31 04:52:03.098818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:52:03.098838 | orchestrator | Tuesday 31 March 2026 04:51:41 +0000 (0:00:00.419) 0:17:14.307 ********* 2026-03-31 04:52:03.098858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:52:03.098876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:52:03.098892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:52:03.098904 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.098939 | orchestrator | 2026-03-31 04:52:03.098951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:52:03.098962 | orchestrator | Tuesday 31 March 2026 04:51:42 +0000 (0:00:00.762) 0:17:15.070 ********* 2026-03-31 04:52:03.098973 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.098985 | orchestrator | 2026-03-31 04:52:03.098996 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:52:03.099007 | orchestrator | Tuesday 31 March 2026 04:51:42 +0000 (0:00:00.164) 0:17:15.234 ********* 2026-03-31 04:52:03.099018 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 04:52:03.099029 | orchestrator | 2026-03-31 04:52:03.099040 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:52:03.099051 | orchestrator | Tuesday 31 March 2026 04:51:43 +0000 (0:00:01.098) 0:17:16.333 ********* 2026-03-31 04:52:03.099065 | orchestrator | changed: [testbed-node-4] 2026-03-31 04:52:03.099079 | orchestrator | 
2026-03-31 04:52:03.099092 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-31 04:52:03.099105 | orchestrator | Tuesday 31 March 2026 04:51:44 +0000 (0:00:00.883) 0:17:17.217 ********* 2026-03-31 04:52:03.099118 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.099131 | orchestrator | 2026-03-31 04:52:03.099144 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:52:03.099157 | orchestrator | Tuesday 31 March 2026 04:51:44 +0000 (0:00:00.142) 0:17:17.360 ********* 2026-03-31 04:52:03.099169 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:52:03.099183 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:52:03.099196 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:52:03.099209 | orchestrator | 2026-03-31 04:52:03.099222 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-31 04:52:03.099236 | orchestrator | Tuesday 31 March 2026 04:51:45 +0000 (0:00:00.655) 0:17:18.015 ********* 2026-03-31 04:52:03.099248 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-31 04:52:03.099261 | orchestrator | 2026-03-31 04:52:03.099274 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-31 04:52:03.099287 | orchestrator | Tuesday 31 March 2026 04:51:45 +0000 (0:00:00.208) 0:17:18.223 ********* 2026-03-31 04:52:03.099300 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.099343 | orchestrator | 2026-03-31 04:52:03.099362 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-31 04:52:03.099382 | orchestrator | Tuesday 31 March 2026 04:51:45 +0000 (0:00:00.116) 
0:17:18.340 ********* 2026-03-31 04:52:03.099396 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.099410 | orchestrator | 2026-03-31 04:52:03.099423 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-31 04:52:03.099436 | orchestrator | Tuesday 31 March 2026 04:51:45 +0000 (0:00:00.138) 0:17:18.479 ********* 2026-03-31 04:52:03.099447 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.099458 | orchestrator | 2026-03-31 04:52:03.099469 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-31 04:52:03.099480 | orchestrator | Tuesday 31 March 2026 04:51:46 +0000 (0:00:00.474) 0:17:18.953 ********* 2026-03-31 04:52:03.099491 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.099502 | orchestrator | 2026-03-31 04:52:03.099513 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-31 04:52:03.099524 | orchestrator | Tuesday 31 March 2026 04:51:46 +0000 (0:00:00.159) 0:17:19.113 ********* 2026-03-31 04:52:03.099536 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 04:52:03.099548 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 04:52:03.099560 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 04:52:03.099580 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 04:52:03.099591 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 04:52:03.099602 | orchestrator | 2026-03-31 04:52:03.099613 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-31 04:52:03.099624 | orchestrator | Tuesday 31 March 2026 04:51:48 +0000 (0:00:01.825) 0:17:20.938 ********* 2026-03-31 
04:52:03.099635 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.099646 | orchestrator | 2026-03-31 04:52:03.099657 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-31 04:52:03.099669 | orchestrator | Tuesday 31 March 2026 04:51:48 +0000 (0:00:00.444) 0:17:21.382 ********* 2026-03-31 04:52:03.099699 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-31 04:52:03.099711 | orchestrator | 2026-03-31 04:52:03.099722 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-31 04:52:03.099733 | orchestrator | Tuesday 31 March 2026 04:51:48 +0000 (0:00:00.219) 0:17:21.601 ********* 2026-03-31 04:52:03.099745 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 04:52:03.099756 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-31 04:52:03.099766 | orchestrator | 2026-03-31 04:52:03.099786 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-31 04:52:03.099797 | orchestrator | Tuesday 31 March 2026 04:51:49 +0000 (0:00:00.828) 0:17:22.430 ********* 2026-03-31 04:52:03.099809 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:52:03.099820 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 04:52:03.099831 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 04:52:03.099848 | orchestrator | 2026-03-31 04:52:03.099867 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:52:03.099886 | orchestrator | Tuesday 31 March 2026 04:51:51 +0000 (0:00:02.133) 0:17:24.563 ********* 2026-03-31 04:52:03.099905 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-31 04:52:03.099923 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 
04:52:03.099941 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.099960 | orchestrator | 2026-03-31 04:52:03.099980 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-31 04:52:03.100000 | orchestrator | Tuesday 31 March 2026 04:51:52 +0000 (0:00:00.938) 0:17:25.502 ********* 2026-03-31 04:52:03.100019 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.100039 | orchestrator | 2026-03-31 04:52:03.100058 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-31 04:52:03.100076 | orchestrator | Tuesday 31 March 2026 04:51:53 +0000 (0:00:00.238) 0:17:25.741 ********* 2026-03-31 04:52:03.100092 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.100103 | orchestrator | 2026-03-31 04:52:03.100114 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-31 04:52:03.100125 | orchestrator | Tuesday 31 March 2026 04:51:53 +0000 (0:00:00.142) 0:17:25.883 ********* 2026-03-31 04:52:03.100136 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.100147 | orchestrator | 2026-03-31 04:52:03.100158 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-31 04:52:03.100169 | orchestrator | Tuesday 31 March 2026 04:51:53 +0000 (0:00:00.140) 0:17:26.024 ********* 2026-03-31 04:52:03.100180 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-31 04:52:03.100190 | orchestrator | 2026-03-31 04:52:03.100201 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-31 04:52:03.100212 | orchestrator | Tuesday 31 March 2026 04:51:53 +0000 (0:00:00.214) 0:17:26.239 ********* 2026-03-31 04:52:03.100223 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.100234 | orchestrator | 2026-03-31 04:52:03.100245 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-31 04:52:03.100266 | orchestrator | Tuesday 31 March 2026 04:51:54 +0000 (0:00:00.456) 0:17:26.696 ********* 2026-03-31 04:52:03.100278 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.100288 | orchestrator | 2026-03-31 04:52:03.100299 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-31 04:52:03.100332 | orchestrator | Tuesday 31 March 2026 04:51:56 +0000 (0:00:02.337) 0:17:29.033 ********* 2026-03-31 04:52:03.100344 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-31 04:52:03.100355 | orchestrator | 2026-03-31 04:52:03.100366 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-31 04:52:03.100376 | orchestrator | Tuesday 31 March 2026 04:51:56 +0000 (0:00:00.477) 0:17:29.511 ********* 2026-03-31 04:52:03.100387 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.100398 | orchestrator | 2026-03-31 04:52:03.100409 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-31 04:52:03.100420 | orchestrator | Tuesday 31 March 2026 04:51:57 +0000 (0:00:00.969) 0:17:30.481 ********* 2026-03-31 04:52:03.100431 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.100442 | orchestrator | 2026-03-31 04:52:03.100453 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-31 04:52:03.100463 | orchestrator | Tuesday 31 March 2026 04:51:58 +0000 (0:00:00.968) 0:17:31.449 ********* 2026-03-31 04:52:03.100474 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:52:03.100485 | orchestrator | 2026-03-31 04:52:03.100496 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-31 04:52:03.100507 | orchestrator | Tuesday 31 March 2026 04:52:00 +0000 (0:00:01.281) 0:17:32.731 ********* 2026-03-31 
04:52:03.100518 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.100529 | orchestrator | 2026-03-31 04:52:03.100540 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-31 04:52:03.100551 | orchestrator | Tuesday 31 March 2026 04:52:00 +0000 (0:00:00.143) 0:17:32.874 ********* 2026-03-31 04:52:03.100562 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:03.100573 | orchestrator | 2026-03-31 04:52:03.100584 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-31 04:52:03.100595 | orchestrator | Tuesday 31 March 2026 04:52:00 +0000 (0:00:00.144) 0:17:33.018 ********* 2026-03-31 04:52:03.100605 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-31 04:52:03.100616 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-31 04:52:03.100627 | orchestrator | 2026-03-31 04:52:03.100638 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-31 04:52:03.100649 | orchestrator | Tuesday 31 March 2026 04:52:01 +0000 (0:00:00.844) 0:17:33.863 ********* 2026-03-31 04:52:03.100660 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-31 04:52:03.100671 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-31 04:52:03.100682 | orchestrator | 2026-03-31 04:52:03.100693 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-31 04:52:03.100712 | orchestrator | Tuesday 31 March 2026 04:52:03 +0000 (0:00:01.897) 0:17:35.760 ********* 2026-03-31 04:52:32.458885 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-31 04:52:32.459032 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-31 04:52:32.459059 | orchestrator | 2026-03-31 04:52:32.459078 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-31 04:52:32.459097 | orchestrator | Tuesday 31 March 2026 04:52:06 +0000 (0:00:03.646) 
0:17:39.407 ********* 2026-03-31 04:52:32.459114 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459125 | orchestrator | 2026-03-31 04:52:32.459152 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-31 04:52:32.459162 | orchestrator | Tuesday 31 March 2026 04:52:06 +0000 (0:00:00.226) 0:17:39.633 ********* 2026-03-31 04:52:32.459172 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459182 | orchestrator | 2026-03-31 04:52:32.459192 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-31 04:52:32.459226 | orchestrator | Tuesday 31 March 2026 04:52:07 +0000 (0:00:00.234) 0:17:39.868 ********* 2026-03-31 04:52:32.459236 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459246 | orchestrator | 2026-03-31 04:52:32.459256 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-31 04:52:32.459266 | orchestrator | Tuesday 31 March 2026 04:52:07 +0000 (0:00:00.575) 0:17:40.444 ********* 2026-03-31 04:52:32.459275 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459285 | orchestrator | 2026-03-31 04:52:32.459295 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-31 04:52:32.459305 | orchestrator | Tuesday 31 March 2026 04:52:07 +0000 (0:00:00.157) 0:17:40.601 ********* 2026-03-31 04:52:32.459314 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459324 | orchestrator | 2026-03-31 04:52:32.459334 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-31 04:52:32.459371 | orchestrator | Tuesday 31 March 2026 04:52:08 +0000 (0:00:00.131) 0:17:40.733 ********* 2026-03-31 04:52:32.459381 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-31 04:52:32.459394 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-31 04:52:32.459405 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-31 04:52:32.459416 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-31 04:52:32.459427 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-03-31 04:52:32.459439 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:52:32.459450 | orchestrator | 2026-03-31 04:52:32.459461 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:52:32.459472 | orchestrator | Tuesday 31 March 2026 04:52:23 +0000 (0:00:15.851) 0:17:56.584 ********* 2026-03-31 04:52:32.459483 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459493 | orchestrator | 2026-03-31 04:52:32.459502 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:52:32.459511 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.134) 0:17:56.719 ********* 2026-03-31 04:52:32.459520 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459529 | orchestrator | 2026-03-31 04:52:32.459538 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:52:32.459547 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.140) 0:17:56.859 ********* 2026-03-31 04:52:32.459556 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459565 | orchestrator | 2026-03-31 04:52:32.459574 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 04:52:32.459583 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 
(0:00:00.125) 0:17:56.984 ********* 2026-03-31 04:52:32.459592 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459601 | orchestrator | 2026-03-31 04:52:32.459610 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-31 04:52:32.459619 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.129) 0:17:57.114 ********* 2026-03-31 04:52:32.459629 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459638 | orchestrator | 2026-03-31 04:52:32.459646 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:52:32.459655 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.134) 0:17:57.248 ********* 2026-03-31 04:52:32.459665 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459674 | orchestrator | 2026-03-31 04:52:32.459683 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:52:32.459692 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.121) 0:17:57.370 ********* 2026-03-31 04:52:32.459701 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:52:32.459717 | orchestrator | 2026-03-31 04:52:32.459726 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-31 04:52:32.459735 | orchestrator | 2026-03-31 04:52:32.459744 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:52:32.459753 | orchestrator | Tuesday 31 March 2026 04:52:24 +0000 (0:00:00.219) 0:17:57.589 ********* 2026-03-31 04:52:32.459761 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-31 04:52:32.459769 | orchestrator | 2026-03-31 04:52:32.459776 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:52:32.459784 | orchestrator | Tuesday 31 March 2026 04:52:25 +0000 
(0:00:00.556) 0:17:58.146 ********* 2026-03-31 04:52:32.459792 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.459800 | orchestrator | 2026-03-31 04:52:32.459808 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:52:32.459816 | orchestrator | Tuesday 31 March 2026 04:52:25 +0000 (0:00:00.457) 0:17:58.604 ********* 2026-03-31 04:52:32.459824 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.459831 | orchestrator | 2026-03-31 04:52:32.459858 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:52:32.459867 | orchestrator | Tuesday 31 March 2026 04:52:26 +0000 (0:00:00.142) 0:17:58.746 ********* 2026-03-31 04:52:32.459874 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.459882 | orchestrator | 2026-03-31 04:52:32.459890 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:52:32.459898 | orchestrator | Tuesday 31 March 2026 04:52:26 +0000 (0:00:00.433) 0:17:59.180 ********* 2026-03-31 04:52:32.459905 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.459913 | orchestrator | 2026-03-31 04:52:32.459925 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:52:32.459933 | orchestrator | Tuesday 31 March 2026 04:52:26 +0000 (0:00:00.152) 0:17:59.332 ********* 2026-03-31 04:52:32.459941 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.460027 | orchestrator | 2026-03-31 04:52:32.460038 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:52:32.460046 | orchestrator | Tuesday 31 March 2026 04:52:26 +0000 (0:00:00.141) 0:17:59.473 ********* 2026-03-31 04:52:32.460054 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:32.460062 | orchestrator | 2026-03-31 04:52:32.460070 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] ***
2026-03-31 04:52:32.460078 | orchestrator | Tuesday 31 March 2026 04:52:26 +0000 (0:00:00.153) 0:17:59.627 *********
2026-03-31 04:52:32.460086 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:32.460094 | orchestrator |
2026-03-31 04:52:32.460101 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-31 04:52:32.460109 | orchestrator | Tuesday 31 March 2026 04:52:27 +0000 (0:00:00.151) 0:17:59.778 *********
2026-03-31 04:52:32.460117 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:32.460125 | orchestrator |
2026-03-31 04:52:32.460133 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-31 04:52:32.460140 | orchestrator | Tuesday 31 March 2026 04:52:27 +0000 (0:00:00.145) 0:17:59.924 *********
2026-03-31 04:52:32.460148 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:52:32.460156 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:52:32.460164 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:52:32.460172 | orchestrator |
2026-03-31 04:52:32.460180 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-31 04:52:32.460188 | orchestrator | Tuesday 31 March 2026 04:52:28 +0000 (0:00:00.978) 0:18:00.902 *********
2026-03-31 04:52:32.460195 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:32.460203 | orchestrator |
2026-03-31 04:52:32.460211 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-31 04:52:32.460219 | orchestrator | Tuesday 31 March 2026 04:52:28 +0000 (0:00:00.308) 0:18:01.211 *********
2026-03-31 04:52:32.460234 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:52:32.460242 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:52:32.460249 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:52:32.460257 | orchestrator |
2026-03-31 04:52:32.460265 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-31 04:52:32.460273 | orchestrator | Tuesday 31 March 2026 04:52:30 +0000 (0:00:02.085) 0:18:03.296 *********
2026-03-31 04:52:32.460281 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-31 04:52:32.460289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-31 04:52:32.460297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-31 04:52:32.460305 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:32.460313 | orchestrator |
2026-03-31 04:52:32.460320 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-31 04:52:32.460328 | orchestrator | Tuesday 31 March 2026 04:52:31 +0000 (0:00:01.047) 0:18:04.343 *********
2026-03-31 04:52:32.460338 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:52:32.460365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:52:32.460373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:52:32.460381 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:32.460389 | orchestrator |
2026-03-31 04:52:32.460397 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-31 04:52:32.460405 | orchestrator | Tuesday 31 March 2026 04:52:32 +0000 (0:00:00.627) 0:18:04.971 *********
2026-03-31 04:52:32.460416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:52:32.460438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.378980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.379080 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379097 | orchestrator |
2026-03-31 04:52:36.379108 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-31 04:52:36.379120 | orchestrator | Tuesday 31 March 2026 04:52:32 +0000 (0:00:00.156) 0:18:05.127 *********
2026-03-31 04:52:36.379165 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:52:29.376893', 'end': '2026-03-31 04:52:29.421363', 'delta': '0:00:00.044470', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.379186 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:52:29.888165', 'end': '2026-03-31 04:52:29.935565', 'delta': '0:00:00.047400', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.379203 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:52:30.435052', 'end': '2026-03-31 04:52:30.484349', 'delta': '0:00:00.049297', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.379220 | orchestrator |
2026-03-31 04:52:36.379236 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-31 04:52:36.379251 | orchestrator | Tuesday 31 March 2026 04:52:32 +0000 (0:00:00.201) 0:18:05.329 *********
2026-03-31 04:52:36.379268 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.379285 | orchestrator |
2026-03-31 04:52:36.379302 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-31 04:52:36.379319 | orchestrator | Tuesday 31 March 2026 04:52:32 +0000 (0:00:00.283) 0:18:05.613 *********
2026-03-31 04:52:36.379335 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379424 | orchestrator |
2026-03-31 04:52:36.379436 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-31 04:52:36.379446 | orchestrator | Tuesday 31 March 2026 04:52:33 +0000 (0:00:00.249) 0:18:05.863 *********
2026-03-31 04:52:36.379456 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.379466 | orchestrator |
2026-03-31 04:52:36.379476 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-31 04:52:36.379485 | orchestrator | Tuesday 31 March 2026 04:52:33 +0000 (0:00:00.151) 0:18:06.014 *********
2026-03-31 04:52:36.379495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-31 04:52:36.379505 | orchestrator |
2026-03-31 04:52:36.379518 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:52:36.379529 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.909) 0:18:06.924 *********
2026-03-31 04:52:36.379540 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.379558 | orchestrator |
2026-03-31 04:52:36.379575 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-31 04:52:36.379627 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.147) 0:18:07.071 *********
2026-03-31 04:52:36.379672 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379692 | orchestrator |
2026-03-31 04:52:36.379710 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-31 04:52:36.379724 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.127) 0:18:07.198 *********
2026-03-31 04:52:36.379736 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379747 | orchestrator |
2026-03-31 04:52:36.379758 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 04:52:36.379771 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.227) 0:18:07.426 *********
2026-03-31 04:52:36.379782 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379792 | orchestrator |
2026-03-31 04:52:36.379801 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-31 04:52:36.379811 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.121) 0:18:07.548 *********
2026-03-31 04:52:36.379821 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379830 | orchestrator |
2026-03-31 04:52:36.379840 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-31 04:52:36.379850 | orchestrator | Tuesday 31 March 2026 04:52:34 +0000 (0:00:00.122) 0:18:07.670 *********
2026-03-31 04:52:36.379860 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.379878 | orchestrator |
2026-03-31 04:52:36.379893 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-31 04:52:36.379908 | orchestrator | Tuesday 31 March 2026 04:52:35 +0000 (0:00:00.543) 0:18:08.214 *********
2026-03-31 04:52:36.379923 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.379938 | orchestrator |
2026-03-31 04:52:36.379952 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-31 04:52:36.379967 | orchestrator | Tuesday 31 March 2026 04:52:35 +0000 (0:00:00.123) 0:18:08.337 *********
2026-03-31 04:52:36.379983 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.380001 | orchestrator |
2026-03-31 04:52:36.380018 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 04:52:36.380035 | orchestrator | Tuesday 31 March 2026 04:52:35 +0000 (0:00:00.188) 0:18:08.526 *********
2026-03-31 04:52:36.380050 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.380067 | orchestrator |
2026-03-31 04:52:36.380084 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-31 04:52:36.380102 | orchestrator | Tuesday 31 March 2026 04:52:35 +0000 (0:00:00.127) 0:18:08.654 *********
2026-03-31 04:52:36.380120 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:52:36.380137 | orchestrator |
2026-03-31 04:52:36.380153 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-31 04:52:36.380164 | orchestrator | Tuesday 31 March 2026 04:52:36 +0000 (0:00:00.177) 0:18:08.831 *********
2026-03-31 04:52:36.380175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []},
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.380188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}})  2026-03-31 04:52:36.380211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:52:36.380241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}})  2026-03-31 04:52:36.511083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:52:36.511218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511276 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}})  2026-03-31 04:52:36.511321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}})  2026-03-31 04:52:36.511334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:52:36.511430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:52:36.511472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-31 04:52:36.722871 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:52:36.722989 | orchestrator |
2026-03-31 04:52:36.723015 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-31 04:52:36.723034 | orchestrator | Tuesday 31 March 2026 04:52:36 +0000 (0:00:00.353) 0:18:09.185 *********
2026-03-31 04:52:36.723057 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:52:36.723079 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723200 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723225 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:36.723281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.265737 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.265856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.265918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.265953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.265968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.266000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:52:40.266099 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:40.266115 | orchestrator | 2026-03-31 04:52:40.266128 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:52:40.266140 | orchestrator | Tuesday 31 March 2026 04:52:36 +0000 (0:00:00.398) 0:18:09.584 ********* 2026-03-31 04:52:40.266151 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:40.266163 | orchestrator | 2026-03-31 04:52:40.266174 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:52:40.266186 | orchestrator | Tuesday 31 March 2026 04:52:37 +0000 (0:00:00.524) 0:18:10.109 ********* 2026-03-31 04:52:40.266197 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:40.266208 | orchestrator | 2026-03-31 04:52:40.266219 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:52:40.266230 | orchestrator | Tuesday 31 March 2026 04:52:37 +0000 (0:00:00.129) 0:18:10.238 ********* 2026-03-31 04:52:40.266241 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:40.266252 | orchestrator | 2026-03-31 04:52:40.266265 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:52:40.266277 | orchestrator | Tuesday 31 March 2026 04:52:38 +0000 (0:00:00.475) 0:18:10.714 ********* 2026-03-31 04:52:40.266290 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:40.266302 | orchestrator | 2026-03-31 04:52:40.266315 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:52:40.266327 | orchestrator | Tuesday 31 March 2026 04:52:38 +0000 (0:00:00.125) 0:18:10.839 ********* 2026-03-31 04:52:40.266340 | orchestrator | skipping: [testbed-node-5] 2026-03-31 
04:52:40.266426 | orchestrator | 2026-03-31 04:52:40.266440 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:52:40.266453 | orchestrator | Tuesday 31 March 2026 04:52:38 +0000 (0:00:00.236) 0:18:11.075 ********* 2026-03-31 04:52:40.266467 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:40.266479 | orchestrator | 2026-03-31 04:52:40.266492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:52:40.266505 | orchestrator | Tuesday 31 March 2026 04:52:38 +0000 (0:00:00.142) 0:18:11.218 ********* 2026-03-31 04:52:40.266519 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-31 04:52:40.266532 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-31 04:52:40.266544 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-31 04:52:40.266557 | orchestrator | 2026-03-31 04:52:40.266577 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:52:40.266590 | orchestrator | Tuesday 31 March 2026 04:52:39 +0000 (0:00:01.353) 0:18:12.572 ********* 2026-03-31 04:52:40.266604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 04:52:40.266617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 04:52:40.266629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 04:52:40.266640 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:40.266651 | orchestrator | 2026-03-31 04:52:40.266662 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:52:40.266673 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.156) 0:18:12.728 ********* 2026-03-31 04:52:40.266684 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-31 04:52:40.266696 | 
orchestrator | 2026-03-31 04:52:40.266727 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:52:54.343460 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.211) 0:18:12.940 ********* 2026-03-31 04:52:54.343611 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.343638 | orchestrator | 2026-03-31 04:52:54.343660 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:52:54.343680 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.152) 0:18:13.092 ********* 2026-03-31 04:52:54.343700 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.343719 | orchestrator | 2026-03-31 04:52:54.343739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:52:54.343759 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.163) 0:18:13.256 ********* 2026-03-31 04:52:54.343779 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.343798 | orchestrator | 2026-03-31 04:52:54.343817 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:52:54.343837 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.158) 0:18:13.415 ********* 2026-03-31 04:52:54.343857 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.343879 | orchestrator | 2026-03-31 04:52:54.343899 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:52:54.343918 | orchestrator | Tuesday 31 March 2026 04:52:40 +0000 (0:00:00.254) 0:18:13.670 ********* 2026-03-31 04:52:54.343937 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:52:54.343957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:52:54.343978 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-31 04:52:54.344000 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.344021 | orchestrator | 2026-03-31 04:52:54.344043 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:52:54.344066 | orchestrator | Tuesday 31 March 2026 04:52:41 +0000 (0:00:00.399) 0:18:14.069 ********* 2026-03-31 04:52:54.344089 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:52:54.344113 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:52:54.344135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:52:54.344158 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.344181 | orchestrator | 2026-03-31 04:52:54.344204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:52:54.344228 | orchestrator | Tuesday 31 March 2026 04:52:41 +0000 (0:00:00.389) 0:18:14.459 ********* 2026-03-31 04:52:54.344249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:52:54.344273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:52:54.344296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:52:54.344320 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.344344 | orchestrator | 2026-03-31 04:52:54.344394 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:52:54.344417 | orchestrator | Tuesday 31 March 2026 04:52:42 +0000 (0:00:00.379) 0:18:14.839 ********* 2026-03-31 04:52:54.344438 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.344458 | orchestrator | 2026-03-31 04:52:54.344479 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:52:54.344497 | orchestrator | Tuesday 31 March 2026 04:52:42 +0000 
(0:00:00.153) 0:18:14.992 ********* 2026-03-31 04:52:54.344517 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:52:54.344538 | orchestrator | 2026-03-31 04:52:54.344558 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:52:54.344578 | orchestrator | Tuesday 31 March 2026 04:52:42 +0000 (0:00:00.325) 0:18:15.318 ********* 2026-03-31 04:52:54.344598 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:52:54.344620 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:52:54.344673 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:52:54.344693 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:52:54.344711 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:52:54.344729 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-31 04:52:54.344747 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:52:54.344764 | orchestrator | 2026-03-31 04:52:54.344783 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:52:54.344799 | orchestrator | Tuesday 31 March 2026 04:52:43 +0000 (0:00:01.078) 0:18:16.396 ********* 2026-03-31 04:52:54.344818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:52:54.344856 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:52:54.344877 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:52:54.344897 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-31 04:52:54.344917 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:52:54.344936 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-31 04:52:54.345024 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:52:54.345050 | orchestrator | 2026-03-31 04:52:54.345070 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-31 04:52:54.345087 | orchestrator | Tuesday 31 March 2026 04:52:45 +0000 (0:00:02.215) 0:18:18.612 ********* 2026-03-31 04:52:54.345099 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345111 | orchestrator | 2026-03-31 04:52:54.345122 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-31 04:52:54.345155 | orchestrator | Tuesday 31 March 2026 04:52:46 +0000 (0:00:00.448) 0:18:19.060 ********* 2026-03-31 04:52:54.345167 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345178 | orchestrator | 2026-03-31 04:52:54.345189 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-31 04:52:54.345200 | orchestrator | Tuesday 31 March 2026 04:52:46 +0000 (0:00:00.139) 0:18:19.199 ********* 2026-03-31 04:52:54.345211 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345222 | orchestrator | 2026-03-31 04:52:54.345233 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-31 04:52:54.345244 | orchestrator | Tuesday 31 March 2026 04:52:46 +0000 (0:00:00.236) 0:18:19.436 ********* 2026-03-31 04:52:54.345255 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-31 04:52:54.345266 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-31 04:52:54.345277 | orchestrator | 2026-03-31 04:52:54.345288 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-31 04:52:54.345299 | orchestrator | Tuesday 31 March 2026 04:52:49 +0000 (0:00:02.925) 0:18:22.361 ********* 2026-03-31 04:52:54.345309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-31 04:52:54.345321 | orchestrator | 2026-03-31 04:52:54.345332 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 04:52:54.345344 | orchestrator | Tuesday 31 March 2026 04:52:49 +0000 (0:00:00.187) 0:18:22.549 ********* 2026-03-31 04:52:54.345355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-31 04:52:54.345410 | orchestrator | 2026-03-31 04:52:54.345423 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 04:52:54.345434 | orchestrator | Tuesday 31 March 2026 04:52:50 +0000 (0:00:00.213) 0:18:22.762 ********* 2026-03-31 04:52:54.345445 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345472 | orchestrator | 2026-03-31 04:52:54.345483 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 04:52:54.345494 | orchestrator | Tuesday 31 March 2026 04:52:50 +0000 (0:00:00.126) 0:18:22.888 ********* 2026-03-31 04:52:54.345505 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345516 | orchestrator | 2026-03-31 04:52:54.345527 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-31 04:52:54.345538 | orchestrator | Tuesday 31 March 2026 04:52:50 +0000 (0:00:00.522) 0:18:23.411 ********* 2026-03-31 04:52:54.345549 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345560 | orchestrator | 2026-03-31 04:52:54.345571 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 04:52:54.345582 | orchestrator | 
Tuesday 31 March 2026 04:52:51 +0000 (0:00:00.520) 0:18:23.932 ********* 2026-03-31 04:52:54.345593 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345604 | orchestrator | 2026-03-31 04:52:54.345615 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 04:52:54.345625 | orchestrator | Tuesday 31 March 2026 04:52:52 +0000 (0:00:00.794) 0:18:24.726 ********* 2026-03-31 04:52:54.345636 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345647 | orchestrator | 2026-03-31 04:52:54.345658 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 04:52:54.345669 | orchestrator | Tuesday 31 March 2026 04:52:52 +0000 (0:00:00.132) 0:18:24.859 ********* 2026-03-31 04:52:54.345680 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345691 | orchestrator | 2026-03-31 04:52:54.345702 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 04:52:54.345713 | orchestrator | Tuesday 31 March 2026 04:52:52 +0000 (0:00:00.129) 0:18:24.988 ********* 2026-03-31 04:52:54.345723 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345734 | orchestrator | 2026-03-31 04:52:54.345745 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 04:52:54.345756 | orchestrator | Tuesday 31 March 2026 04:52:52 +0000 (0:00:00.132) 0:18:25.120 ********* 2026-03-31 04:52:54.345767 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345778 | orchestrator | 2026-03-31 04:52:54.345789 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 04:52:54.345799 | orchestrator | Tuesday 31 March 2026 04:52:52 +0000 (0:00:00.521) 0:18:25.642 ********* 2026-03-31 04:52:54.345810 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345821 | orchestrator | 2026-03-31 04:52:54.345832 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 04:52:54.345843 | orchestrator | Tuesday 31 March 2026 04:52:53 +0000 (0:00:00.532) 0:18:26.175 ********* 2026-03-31 04:52:54.345854 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345865 | orchestrator | 2026-03-31 04:52:54.345876 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 04:52:54.345887 | orchestrator | Tuesday 31 March 2026 04:52:53 +0000 (0:00:00.136) 0:18:26.311 ********* 2026-03-31 04:52:54.345897 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.345908 | orchestrator | 2026-03-31 04:52:54.345927 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 04:52:54.345938 | orchestrator | Tuesday 31 March 2026 04:52:53 +0000 (0:00:00.126) 0:18:26.437 ********* 2026-03-31 04:52:54.345949 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.345960 | orchestrator | 2026-03-31 04:52:54.345971 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 04:52:54.345982 | orchestrator | Tuesday 31 March 2026 04:52:53 +0000 (0:00:00.158) 0:18:26.596 ********* 2026-03-31 04:52:54.345992 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.346003 | orchestrator | 2026-03-31 04:52:54.346078 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 04:52:54.346092 | orchestrator | Tuesday 31 March 2026 04:52:54 +0000 (0:00:00.135) 0:18:26.731 ********* 2026-03-31 04:52:54.346103 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:52:54.346121 | orchestrator | 2026-03-31 04:52:54.346133 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:52:54.346144 | orchestrator | Tuesday 31 March 2026 04:52:54 +0000 (0:00:00.147) 0:18:26.879 ********* 2026-03-31 04:52:54.346155 | 
orchestrator | skipping: [testbed-node-5] 2026-03-31 04:52:54.346166 | orchestrator | 2026-03-31 04:52:54.346187 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:53:05.911094 | orchestrator | Tuesday 31 March 2026 04:52:54 +0000 (0:00:00.132) 0:18:27.011 ********* 2026-03-31 04:53:05.911212 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911228 | orchestrator | 2026-03-31 04:53:05.911239 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:53:05.911249 | orchestrator | Tuesday 31 March 2026 04:52:54 +0000 (0:00:00.136) 0:18:27.147 ********* 2026-03-31 04:53:05.911258 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911267 | orchestrator | 2026-03-31 04:53:05.911276 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:53:05.911285 | orchestrator | Tuesday 31 March 2026 04:52:54 +0000 (0:00:00.432) 0:18:27.580 ********* 2026-03-31 04:53:05.911294 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.911305 | orchestrator | 2026-03-31 04:53:05.911314 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:53:05.911323 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.154) 0:18:27.734 ********* 2026-03-31 04:53:05.911332 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.911341 | orchestrator | 2026-03-31 04:53:05.911351 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:53:05.911360 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.228) 0:18:27.963 ********* 2026-03-31 04:53:05.911369 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911428 | orchestrator | 2026-03-31 04:53:05.911438 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 
04:53:05.911447 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.168) 0:18:28.132 ********* 2026-03-31 04:53:05.911457 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911465 | orchestrator | 2026-03-31 04:53:05.911475 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:53:05.911484 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.129) 0:18:28.262 ********* 2026-03-31 04:53:05.911494 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911503 | orchestrator | 2026-03-31 04:53:05.911512 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:53:05.911522 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.128) 0:18:28.391 ********* 2026-03-31 04:53:05.911531 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911540 | orchestrator | 2026-03-31 04:53:05.911550 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:53:05.911559 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.140) 0:18:28.531 ********* 2026-03-31 04:53:05.911568 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911578 | orchestrator | 2026-03-31 04:53:05.911588 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:53:05.911597 | orchestrator | Tuesday 31 March 2026 04:52:55 +0000 (0:00:00.117) 0:18:28.649 ********* 2026-03-31 04:53:05.911606 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911615 | orchestrator | 2026-03-31 04:53:05.911624 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:53:05.911634 | orchestrator | Tuesday 31 March 2026 04:52:56 +0000 (0:00:00.127) 0:18:28.776 ********* 2026-03-31 04:53:05.911643 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911652 | 
orchestrator | 2026-03-31 04:53:05.911661 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:53:05.911671 | orchestrator | Tuesday 31 March 2026 04:52:56 +0000 (0:00:00.137) 0:18:28.914 ********* 2026-03-31 04:53:05.911681 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911717 | orchestrator | 2026-03-31 04:53:05.911727 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:53:05.911736 | orchestrator | Tuesday 31 March 2026 04:52:56 +0000 (0:00:00.129) 0:18:29.043 ********* 2026-03-31 04:53:05.911746 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911755 | orchestrator | 2026-03-31 04:53:05.911764 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:53:05.911773 | orchestrator | Tuesday 31 March 2026 04:52:56 +0000 (0:00:00.131) 0:18:29.175 ********* 2026-03-31 04:53:05.911783 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911792 | orchestrator | 2026-03-31 04:53:05.911801 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:53:05.911811 | orchestrator | Tuesday 31 March 2026 04:52:56 +0000 (0:00:00.429) 0:18:29.604 ********* 2026-03-31 04:53:05.911820 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911829 | orchestrator | 2026-03-31 04:53:05.911838 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-31 04:53:05.911847 | orchestrator | Tuesday 31 March 2026 04:52:57 +0000 (0:00:00.144) 0:18:29.748 ********* 2026-03-31 04:53:05.911856 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.911866 | orchestrator | 2026-03-31 04:53:05.911875 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:53:05.911898 | orchestrator | Tuesday 31 
March 2026 04:52:57 +0000 (0:00:00.179) 0:18:29.928 ********* 2026-03-31 04:53:05.911907 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.911916 | orchestrator | 2026-03-31 04:53:05.911923 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:53:05.911931 | orchestrator | Tuesday 31 March 2026 04:52:58 +0000 (0:00:00.904) 0:18:30.832 ********* 2026-03-31 04:53:05.911939 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.911946 | orchestrator | 2026-03-31 04:53:05.911954 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:53:05.911963 | orchestrator | Tuesday 31 March 2026 04:52:59 +0000 (0:00:01.255) 0:18:32.088 ********* 2026-03-31 04:53:05.911971 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-31 04:53:05.911980 | orchestrator | 2026-03-31 04:53:05.911988 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:53:05.911996 | orchestrator | Tuesday 31 March 2026 04:52:59 +0000 (0:00:00.194) 0:18:32.283 ********* 2026-03-31 04:53:05.912005 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912012 | orchestrator | 2026-03-31 04:53:05.912021 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:53:05.912050 | orchestrator | Tuesday 31 March 2026 04:52:59 +0000 (0:00:00.129) 0:18:32.413 ********* 2026-03-31 04:53:05.912058 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912066 | orchestrator | 2026-03-31 04:53:05.912074 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:53:05.912081 | orchestrator | Tuesday 31 March 2026 04:52:59 +0000 (0:00:00.140) 0:18:32.553 ********* 2026-03-31 04:53:05.912089 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:53:05.912097 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:53:05.912105 | orchestrator | 2026-03-31 04:53:05.912113 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:53:05.912121 | orchestrator | Tuesday 31 March 2026 04:53:00 +0000 (0:00:00.801) 0:18:33.355 ********* 2026-03-31 04:53:05.912128 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.912136 | orchestrator | 2026-03-31 04:53:05.912143 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:53:05.912150 | orchestrator | Tuesday 31 March 2026 04:53:01 +0000 (0:00:00.474) 0:18:33.829 ********* 2026-03-31 04:53:05.912158 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912166 | orchestrator | 2026-03-31 04:53:05.912173 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:53:05.912193 | orchestrator | Tuesday 31 March 2026 04:53:01 +0000 (0:00:00.136) 0:18:33.965 ********* 2026-03-31 04:53:05.912200 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912208 | orchestrator | 2026-03-31 04:53:05.912215 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:53:05.912223 | orchestrator | Tuesday 31 March 2026 04:53:01 +0000 (0:00:00.145) 0:18:34.110 ********* 2026-03-31 04:53:05.912231 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912239 | orchestrator | 2026-03-31 04:53:05.912248 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:53:05.912256 | orchestrator | Tuesday 31 March 2026 04:53:01 +0000 (0:00:00.436) 0:18:34.546 ********* 2026-03-31 04:53:05.912263 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-03-31 04:53:05.912271 | orchestrator | 2026-03-31 04:53:05.912279 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:53:05.912286 | orchestrator | Tuesday 31 March 2026 04:53:02 +0000 (0:00:00.228) 0:18:34.775 ********* 2026-03-31 04:53:05.912294 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.912302 | orchestrator | 2026-03-31 04:53:05.912309 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:53:05.912317 | orchestrator | Tuesday 31 March 2026 04:53:02 +0000 (0:00:00.738) 0:18:35.514 ********* 2026-03-31 04:53:05.912325 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:53:05.912333 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:53:05.912341 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:53:05.912348 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912356 | orchestrator | 2026-03-31 04:53:05.912364 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:53:05.912371 | orchestrator | Tuesday 31 March 2026 04:53:02 +0000 (0:00:00.141) 0:18:35.655 ********* 2026-03-31 04:53:05.912405 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912414 | orchestrator | 2026-03-31 04:53:05.912423 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-31 04:53:05.912431 | orchestrator | Tuesday 31 March 2026 04:53:03 +0000 (0:00:00.131) 0:18:35.786 ********* 2026-03-31 04:53:05.912439 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912447 | orchestrator | 2026-03-31 04:53:05.912455 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:53:05.912463 | 
orchestrator | Tuesday 31 March 2026 04:53:03 +0000 (0:00:00.163) 0:18:35.950 ********* 2026-03-31 04:53:05.912470 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912478 | orchestrator | 2026-03-31 04:53:05.912486 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:53:05.912510 | orchestrator | Tuesday 31 March 2026 04:53:03 +0000 (0:00:00.150) 0:18:36.100 ********* 2026-03-31 04:53:05.912521 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912539 | orchestrator | 2026-03-31 04:53:05.912546 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:53:05.912554 | orchestrator | Tuesday 31 March 2026 04:53:03 +0000 (0:00:00.138) 0:18:36.239 ********* 2026-03-31 04:53:05.912561 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912570 | orchestrator | 2026-03-31 04:53:05.912578 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:53:05.912595 | orchestrator | Tuesday 31 March 2026 04:53:03 +0000 (0:00:00.146) 0:18:36.385 ********* 2026-03-31 04:53:05.912603 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.912612 | orchestrator | 2026-03-31 04:53:05.912620 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:53:05.912629 | orchestrator | Tuesday 31 March 2026 04:53:05 +0000 (0:00:01.420) 0:18:37.806 ********* 2026-03-31 04:53:05.912650 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:05.912658 | orchestrator | 2026-03-31 04:53:05.912665 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:53:05.912673 | orchestrator | Tuesday 31 March 2026 04:53:05 +0000 (0:00:00.144) 0:18:37.951 ********* 2026-03-31 04:53:05.912681 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-03-31 04:53:05.912688 | orchestrator | 2026-03-31 04:53:05.912696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:53:05.912703 | orchestrator | Tuesday 31 March 2026 04:53:05 +0000 (0:00:00.475) 0:18:38.426 ********* 2026-03-31 04:53:05.912711 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:05.912718 | orchestrator | 2026-03-31 04:53:05.912726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:53:05.912749 | orchestrator | Tuesday 31 March 2026 04:53:05 +0000 (0:00:00.151) 0:18:38.578 ********* 2026-03-31 04:53:27.347360 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347549 | orchestrator | 2026-03-31 04:53:27.347569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:53:27.347583 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.159) 0:18:38.737 ********* 2026-03-31 04:53:27.347594 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347605 | orchestrator | 2026-03-31 04:53:27.347617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:53:27.347628 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.140) 0:18:38.878 ********* 2026-03-31 04:53:27.347639 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347650 | orchestrator | 2026-03-31 04:53:27.347662 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-31 04:53:27.347673 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.156) 0:18:39.034 ********* 2026-03-31 04:53:27.347685 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347697 | orchestrator | 2026-03-31 04:53:27.347708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:53:27.347719 | orchestrator | 
Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.144) 0:18:39.179 ********* 2026-03-31 04:53:27.347730 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347741 | orchestrator | 2026-03-31 04:53:27.347752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:53:27.347763 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.152) 0:18:39.332 ********* 2026-03-31 04:53:27.347774 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347785 | orchestrator | 2026-03-31 04:53:27.347797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:53:27.347808 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.145) 0:18:39.477 ********* 2026-03-31 04:53:27.347819 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.347830 | orchestrator | 2026-03-31 04:53:27.347841 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:53:27.347852 | orchestrator | Tuesday 31 March 2026 04:53:06 +0000 (0:00:00.145) 0:18:39.623 ********* 2026-03-31 04:53:27.347864 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:27.347876 | orchestrator | 2026-03-31 04:53:27.347887 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:53:27.347901 | orchestrator | Tuesday 31 March 2026 04:53:07 +0000 (0:00:00.211) 0:18:39.834 ********* 2026-03-31 04:53:27.347913 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-31 04:53:27.347926 | orchestrator | 2026-03-31 04:53:27.347939 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:53:27.347953 | orchestrator | Tuesday 31 March 2026 04:53:07 +0000 (0:00:00.183) 0:18:40.018 ********* 2026-03-31 04:53:27.347965 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-03-31 04:53:27.347978 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-31 04:53:27.347991 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-31 04:53:27.348033 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-31 04:53:27.348053 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-31 04:53:27.348073 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-31 04:53:27.348093 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-31 04:53:27.348111 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:53:27.348131 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:53:27.348145 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:53:27.348164 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:53:27.348183 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:53:27.348202 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:53:27.348221 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:53:27.348240 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-31 04:53:27.348257 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-31 04:53:27.348274 | orchestrator | 2026-03-31 04:53:27.348286 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:53:27.348297 | orchestrator | Tuesday 31 March 2026 04:53:13 +0000 (0:00:05.707) 0:18:45.725 ********* 2026-03-31 04:53:27.348308 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-31 04:53:27.348319 | orchestrator | 2026-03-31 04:53:27.348344 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-31 04:53:27.348356 | orchestrator | Tuesday 31 March 2026 04:53:13 +0000 (0:00:00.202) 0:18:45.928 ********* 2026-03-31 04:53:27.348367 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 04:53:27.348380 | orchestrator | 2026-03-31 04:53:27.348391 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-31 04:53:27.348434 | orchestrator | Tuesday 31 March 2026 04:53:13 +0000 (0:00:00.476) 0:18:46.405 ********* 2026-03-31 04:53:27.348445 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 04:53:27.348457 | orchestrator | 2026-03-31 04:53:27.348468 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:53:27.348479 | orchestrator | Tuesday 31 March 2026 04:53:14 +0000 (0:00:01.026) 0:18:47.432 ********* 2026-03-31 04:53:27.348490 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348501 | orchestrator | 2026-03-31 04:53:27.348512 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:53:27.348544 | orchestrator | Tuesday 31 March 2026 04:53:14 +0000 (0:00:00.152) 0:18:47.584 ********* 2026-03-31 04:53:27.348555 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348566 | orchestrator | 2026-03-31 04:53:27.348578 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:53:27.348589 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.137) 0:18:47.722 ********* 2026-03-31 04:53:27.348600 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348611 | orchestrator | 2026-03-31 04:53:27.348622 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-31 04:53:27.348633 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.135) 0:18:47.857 ********* 2026-03-31 04:53:27.348644 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348655 | orchestrator | 2026-03-31 04:53:27.348666 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:53:27.348677 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.129) 0:18:47.986 ********* 2026-03-31 04:53:27.348688 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348709 | orchestrator | 2026-03-31 04:53:27.348721 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:53:27.348732 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.150) 0:18:48.137 ********* 2026-03-31 04:53:27.348743 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348754 | orchestrator | 2026-03-31 04:53:27.348765 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:53:27.348776 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.123) 0:18:48.260 ********* 2026-03-31 04:53:27.348787 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348798 | orchestrator | 2026-03-31 04:53:27.348809 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:53:27.348820 | orchestrator | Tuesday 31 March 2026 04:53:15 +0000 (0:00:00.126) 0:18:48.387 ********* 2026-03-31 04:53:27.348831 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348842 | orchestrator | 2026-03-31 04:53:27.348854 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:53:27.348865 | orchestrator | Tuesday 31 
March 2026 04:53:15 +0000 (0:00:00.146) 0:18:48.533 ********* 2026-03-31 04:53:27.348876 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348887 | orchestrator | 2026-03-31 04:53:27.348898 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:53:27.348909 | orchestrator | Tuesday 31 March 2026 04:53:16 +0000 (0:00:00.417) 0:18:48.951 ********* 2026-03-31 04:53:27.348920 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.348931 | orchestrator | 2026-03-31 04:53:27.348942 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:53:27.348953 | orchestrator | Tuesday 31 March 2026 04:53:16 +0000 (0:00:00.142) 0:18:49.093 ********* 2026-03-31 04:53:27.348964 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:27.348975 | orchestrator | 2026-03-31 04:53:27.348986 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:53:27.348997 | orchestrator | Tuesday 31 March 2026 04:53:16 +0000 (0:00:00.198) 0:18:49.292 ********* 2026-03-31 04:53:27.349008 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:53:27.349019 | orchestrator | 2026-03-31 04:53:27.349030 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:53:27.349041 | orchestrator | Tuesday 31 March 2026 04:53:20 +0000 (0:00:03.407) 0:18:52.700 ********* 2026-03-31 04:53:27.349052 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 04:53:27.349063 | orchestrator | 2026-03-31 04:53:27.349074 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:53:27.349085 | orchestrator | Tuesday 31 March 2026 04:53:20 +0000 (0:00:00.184) 0:18:52.885 ********* 2026-03-31 04:53:27.349098 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-31 04:53:27.349119 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-31 04:53:27.349132 | orchestrator | 2026-03-31 04:53:27.349144 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:53:27.349155 | orchestrator | Tuesday 31 March 2026 04:53:26 +0000 (0:00:06.687) 0:18:59.573 ********* 2026-03-31 04:53:27.349166 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.349177 | orchestrator | 2026-03-31 04:53:27.349195 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:53:27.349206 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.151) 0:18:59.724 ********* 2026-03-31 04:53:27.349217 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.349228 | orchestrator | 2026-03-31 04:53:27.349240 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:53:27.349251 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.152) 0:18:59.877 ********* 2026-03-31 04:53:27.349262 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:27.349273 | orchestrator | 2026-03-31 04:53:27.349284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-31 04:53:27.349302 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.136) 0:19:00.013 ********* 2026-03-31 04:53:47.275566 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.275708 | orchestrator | 2026-03-31 04:53:47.275739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:53:47.275764 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.157) 0:19:00.171 ********* 2026-03-31 04:53:47.275784 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.275806 | orchestrator | 2026-03-31 04:53:47.275825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:53:47.275844 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.172) 0:19:00.344 ********* 2026-03-31 04:53:47.275856 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.275868 | orchestrator | 2026-03-31 04:53:47.275879 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:53:47.275891 | orchestrator | Tuesday 31 March 2026 04:53:27 +0000 (0:00:00.251) 0:19:00.596 ********* 2026-03-31 04:53:47.275902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:53:47.275915 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:53:47.275926 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:53:47.275937 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.275948 | orchestrator | 2026-03-31 04:53:47.275959 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:53:47.275970 | orchestrator | Tuesday 31 March 2026 04:53:28 +0000 (0:00:00.780) 0:19:01.376 ********* 2026-03-31 04:53:47.275981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:53:47.275993 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:53:47.276004 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:53:47.276015 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.276026 | orchestrator | 2026-03-31 04:53:47.276037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:53:47.276048 | orchestrator | Tuesday 31 March 2026 04:53:29 +0000 (0:00:01.126) 0:19:02.503 ********* 2026-03-31 04:53:47.276060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:53:47.276072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:53:47.276085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:53:47.276098 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.276111 | orchestrator | 2026-03-31 04:53:47.276124 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:53:47.276136 | orchestrator | Tuesday 31 March 2026 04:53:30 +0000 (0:00:00.410) 0:19:02.913 ********* 2026-03-31 04:53:47.276149 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.276162 | orchestrator | 2026-03-31 04:53:47.276174 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:53:47.276187 | orchestrator | Tuesday 31 March 2026 04:53:30 +0000 (0:00:00.180) 0:19:03.093 ********* 2026-03-31 04:53:47.276199 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:53:47.276211 | orchestrator | 2026-03-31 04:53:47.276224 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:53:47.276261 | orchestrator | Tuesday 31 March 2026 04:53:30 +0000 (0:00:00.409) 0:19:03.503 ********* 2026-03-31 04:53:47.276272 | orchestrator | changed: [testbed-node-5] 2026-03-31 04:53:47.276283 | orchestrator | 
2026-03-31 04:53:47.276295 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-31 04:53:47.276306 | orchestrator | Tuesday 31 March 2026 04:53:31 +0000 (0:00:00.827) 0:19:04.330 ********* 2026-03-31 04:53:47.276317 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.276328 | orchestrator | 2026-03-31 04:53:47.276340 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-31 04:53:47.276351 | orchestrator | Tuesday 31 March 2026 04:53:31 +0000 (0:00:00.133) 0:19:04.464 ********* 2026-03-31 04:53:47.276362 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:53:47.276373 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:53:47.276384 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:53:47.276395 | orchestrator | 2026-03-31 04:53:47.276406 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-31 04:53:47.276461 | orchestrator | Tuesday 31 March 2026 04:53:32 +0000 (0:00:00.941) 0:19:05.405 ********* 2026-03-31 04:53:47.276474 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-31 04:53:47.276485 | orchestrator | 2026-03-31 04:53:47.276511 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-31 04:53:47.276523 | orchestrator | Tuesday 31 March 2026 04:53:32 +0000 (0:00:00.236) 0:19:05.642 ********* 2026-03-31 04:53:47.276534 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.276545 | orchestrator | 2026-03-31 04:53:47.276556 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-31 04:53:47.276567 | orchestrator | Tuesday 31 March 2026 04:53:33 +0000 (0:00:00.146) 
0:19:05.788 ********* 2026-03-31 04:53:47.276578 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.276589 | orchestrator | 2026-03-31 04:53:47.276600 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-31 04:53:47.276610 | orchestrator | Tuesday 31 March 2026 04:53:33 +0000 (0:00:00.138) 0:19:05.927 ********* 2026-03-31 04:53:47.276621 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.276632 | orchestrator | 2026-03-31 04:53:47.276643 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-31 04:53:47.276654 | orchestrator | Tuesday 31 March 2026 04:53:33 +0000 (0:00:00.465) 0:19:06.392 ********* 2026-03-31 04:53:47.276665 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.276676 | orchestrator | 2026-03-31 04:53:47.276687 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-31 04:53:47.276703 | orchestrator | Tuesday 31 March 2026 04:53:34 +0000 (0:00:00.468) 0:19:06.861 ********* 2026-03-31 04:53:47.276750 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-31 04:53:47.276776 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-31 04:53:47.276795 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-31 04:53:47.276812 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-31 04:53:47.276829 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-31 04:53:47.276847 | orchestrator | 2026-03-31 04:53:47.276866 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-31 04:53:47.276886 | orchestrator | Tuesday 31 March 2026 04:53:35 +0000 (0:00:01.819) 0:19:08.680 ********* 2026-03-31 
04:53:47.276904 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.276923 | orchestrator | 2026-03-31 04:53:47.276942 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-31 04:53:47.276976 | orchestrator | Tuesday 31 March 2026 04:53:36 +0000 (0:00:00.125) 0:19:08.806 ********* 2026-03-31 04:53:47.276995 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-31 04:53:47.277013 | orchestrator | 2026-03-31 04:53:47.277031 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-31 04:53:47.277049 | orchestrator | Tuesday 31 March 2026 04:53:36 +0000 (0:00:00.213) 0:19:09.020 ********* 2026-03-31 04:53:47.277068 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-31 04:53:47.277087 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-31 04:53:47.277106 | orchestrator | 2026-03-31 04:53:47.277126 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-31 04:53:47.277145 | orchestrator | Tuesday 31 March 2026 04:53:37 +0000 (0:00:00.798) 0:19:09.818 ********* 2026-03-31 04:53:47.277158 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:53:47.277169 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 04:53:47.277180 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 04:53:47.277191 | orchestrator | 2026-03-31 04:53:47.277202 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:53:47.277213 | orchestrator | Tuesday 31 March 2026 04:53:39 +0000 (0:00:02.151) 0:19:11.970 ********* 2026-03-31 04:53:47.277223 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-31 04:53:47.277235 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 
04:53:47.277246 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277256 | orchestrator | 2026-03-31 04:53:47.277268 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-31 04:53:47.277282 | orchestrator | Tuesday 31 March 2026 04:53:40 +0000 (0:00:00.973) 0:19:12.943 ********* 2026-03-31 04:53:47.277306 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.277331 | orchestrator | 2026-03-31 04:53:47.277349 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-31 04:53:47.277367 | orchestrator | Tuesday 31 March 2026 04:53:40 +0000 (0:00:00.237) 0:19:13.181 ********* 2026-03-31 04:53:47.277384 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.277402 | orchestrator | 2026-03-31 04:53:47.277460 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-31 04:53:47.277482 | orchestrator | Tuesday 31 March 2026 04:53:40 +0000 (0:00:00.140) 0:19:13.322 ********* 2026-03-31 04:53:47.277500 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.277520 | orchestrator | 2026-03-31 04:53:47.277539 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-31 04:53:47.277557 | orchestrator | Tuesday 31 March 2026 04:53:40 +0000 (0:00:00.136) 0:19:13.459 ********* 2026-03-31 04:53:47.277572 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-31 04:53:47.277583 | orchestrator | 2026-03-31 04:53:47.277594 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-31 04:53:47.277605 | orchestrator | Tuesday 31 March 2026 04:53:40 +0000 (0:00:00.196) 0:19:13.656 ********* 2026-03-31 04:53:47.277617 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277635 | orchestrator | 2026-03-31 04:53:47.277654 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-31 04:53:47.277672 | orchestrator | Tuesday 31 March 2026 04:53:41 +0000 (0:00:00.756) 0:19:14.412 ********* 2026-03-31 04:53:47.277693 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277711 | orchestrator | 2026-03-31 04:53:47.277730 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-31 04:53:47.277751 | orchestrator | Tuesday 31 March 2026 04:53:43 +0000 (0:00:02.227) 0:19:16.640 ********* 2026-03-31 04:53:47.277763 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-31 04:53:47.277773 | orchestrator | 2026-03-31 04:53:47.277784 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-31 04:53:47.277806 | orchestrator | Tuesday 31 March 2026 04:53:44 +0000 (0:00:00.208) 0:19:16.848 ********* 2026-03-31 04:53:47.277818 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277829 | orchestrator | 2026-03-31 04:53:47.277845 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-31 04:53:47.277864 | orchestrator | Tuesday 31 March 2026 04:53:45 +0000 (0:00:00.941) 0:19:17.790 ********* 2026-03-31 04:53:47.277882 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277901 | orchestrator | 2026-03-31 04:53:47.277919 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-31 04:53:47.277938 | orchestrator | Tuesday 31 March 2026 04:53:45 +0000 (0:00:00.854) 0:19:18.644 ********* 2026-03-31 04:53:47.277957 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:53:47.277976 | orchestrator | 2026-03-31 04:53:47.277995 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-31 04:53:47.278014 | orchestrator | Tuesday 31 March 2026 04:53:47 +0000 (0:00:01.140) 0:19:19.785 ********* 2026-03-31 
04:53:47.278107 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:53:47.278125 | orchestrator | 2026-03-31 04:53:47.278164 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-31 04:55:31.696386 | orchestrator | Tuesday 31 March 2026 04:53:47 +0000 (0:00:00.161) 0:19:19.946 ********* 2026-03-31 04:55:31.696474 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696484 | orchestrator | 2026-03-31 04:55:31.696492 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-31 04:55:31.696499 | orchestrator | Tuesday 31 March 2026 04:53:47 +0000 (0:00:00.157) 0:19:20.104 ********* 2026-03-31 04:55:31.696505 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:55:31.696511 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-31 04:55:31.696517 | orchestrator | 2026-03-31 04:55:31.696548 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-31 04:55:31.696554 | orchestrator | Tuesday 31 March 2026 04:53:48 +0000 (0:00:00.831) 0:19:20.935 ********* 2026-03-31 04:55:31.696560 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:55:31.696567 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-31 04:55:31.696573 | orchestrator | 2026-03-31 04:55:31.696579 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-31 04:55:31.696585 | orchestrator | Tuesday 31 March 2026 04:53:50 +0000 (0:00:01.837) 0:19:22.773 ********* 2026-03-31 04:55:31.696592 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-31 04:55:31.696599 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-31 04:55:31.696605 | orchestrator | 2026-03-31 04:55:31.696611 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-31 04:55:31.696617 | orchestrator | Tuesday 31 March 2026 04:53:53 +0000 (0:00:03.452) 
0:19:26.225 ********* 2026-03-31 04:55:31.696624 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696630 | orchestrator | 2026-03-31 04:55:31.696636 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-31 04:55:31.696642 | orchestrator | Tuesday 31 March 2026 04:53:53 +0000 (0:00:00.220) 0:19:26.446 ********* 2026-03-31 04:55:31.696648 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-31 04:55:31.696655 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:55:31.696662 | orchestrator | 2026-03-31 04:55:31.696668 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-31 04:55:31.696674 | orchestrator | Tuesday 31 March 2026 04:54:06 +0000 (0:00:13.007) 0:19:39.454 ********* 2026-03-31 04:55:31.696680 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696687 | orchestrator | 2026-03-31 04:55:31.696693 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-31 04:55:31.696699 | orchestrator | Tuesday 31 March 2026 04:54:07 +0000 (0:00:00.316) 0:19:39.770 ********* 2026-03-31 04:55:31.696705 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696711 | orchestrator | 2026-03-31 04:55:31.696736 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-31 04:55:31.696743 | orchestrator | Tuesday 31 March 2026 04:54:07 +0000 (0:00:00.122) 0:19:39.893 ********* 2026-03-31 04:55:31.696749 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696755 | orchestrator | 2026-03-31 04:55:31.696761 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-31 04:55:31.696767 | orchestrator | Tuesday 31 March 2026 04:54:07 +0000 (0:00:00.126) 0:19:40.020 ********* 2026-03-31 04:55:31.696773 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-03-31 04:55:31.696779 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:55:31.696785 | orchestrator | 2026-03-31 04:55:31.696791 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-31 04:55:31.696797 | orchestrator | Tuesday 31 March 2026 04:54:11 +0000 (0:00:04.115) 0:19:44.136 ********* 2026-03-31 04:55:31.696803 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696809 | orchestrator | 2026-03-31 04:55:31.696815 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-31 04:55:31.696820 | orchestrator | Tuesday 31 March 2026 04:54:11 +0000 (0:00:00.132) 0:19:44.268 ********* 2026-03-31 04:55:31.696826 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696832 | orchestrator | 2026-03-31 04:55:31.696838 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-31 04:55:31.696844 | orchestrator | Tuesday 31 March 2026 04:54:11 +0000 (0:00:00.112) 0:19:44.380 ********* 2026-03-31 04:55:31.696850 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696856 | orchestrator | 2026-03-31 04:55:31.696862 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-31 04:55:31.696878 | orchestrator | Tuesday 31 March 2026 04:54:11 +0000 (0:00:00.118) 0:19:44.498 ********* 2026-03-31 04:55:31.696884 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696890 | orchestrator | 2026-03-31 04:55:31.696896 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2026-03-31 04:55:31.696903 | orchestrator | Tuesday 31 March 2026 04:54:11 +0000 (0:00:00.124) 0:19:44.623 ********* 2026-03-31 04:55:31.696909 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696915 | orchestrator | 2026-03-31 04:55:31.696922 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-31 04:55:31.696929 | orchestrator | Tuesday 31 March 2026 04:54:12 +0000 (0:00:00.119) 0:19:44.742 ********* 2026-03-31 04:55:31.696935 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696942 | orchestrator | 2026-03-31 04:55:31.696949 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-31 04:55:31.696955 | orchestrator | Tuesday 31 March 2026 04:54:12 +0000 (0:00:00.130) 0:19:44.873 ********* 2026-03-31 04:55:31.696962 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:55:31.696968 | orchestrator | 2026-03-31 04:55:31.696975 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-31 04:55:31.696982 | orchestrator | 2026-03-31 04:55:31.696989 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:55:31.696995 | orchestrator | Tuesday 31 March 2026 04:54:12 +0000 (0:00:00.646) 0:19:45.519 ********* 2026-03-31 04:55:31.697002 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:55:31.697009 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:55:31.697027 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:55:31.697034 | orchestrator | 2026-03-31 04:55:31.697041 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:55:31.697048 | orchestrator | Tuesday 31 March 2026 04:54:13 +0000 (0:00:00.663) 0:19:46.183 ********* 2026-03-31 04:55:31.697054 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:55:31.697061 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 04:55:31.697067 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:55:31.697074 | orchestrator | 2026-03-31 04:55:31.697081 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-03-31 04:55:31.697094 | orchestrator | Tuesday 31 March 2026 04:54:14 +0000 (0:00:00.600) 0:19:46.783 ********* 2026-03-31 04:55:31.697100 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-31 04:55:31.697107 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-31 04:55:31.697114 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-31 04:55:31.697121 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-31 04:55:31.697129 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-31 04:55:31.697136 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-31 04:55:31.697143 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-31 04:55:31.697150 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-31 04:55:31.697156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-31 04:55:31.697163 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-31 04:55:31.697170 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  
2026-03-31 04:55:31.697177 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-31 04:55:31.697184 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-31 04:55:31.697191 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-31 04:55:31.697197 | orchestrator | 2026-03-31 04:55:31.697204 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-31 04:55:31.697210 | orchestrator | Tuesday 31 March 2026 04:55:22 +0000 (0:01:08.744) 0:20:55.528 ********* 2026-03-31 04:55:31.697217 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-31 04:55:31.697224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-31 04:55:31.697230 | orchestrator | 2026-03-31 04:55:31.697237 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-31 04:55:31.697243 | orchestrator | Tuesday 31 March 2026 04:55:27 +0000 (0:00:04.982) 0:21:00.510 ********* 2026-03-31 04:55:31.697250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:55:31.697257 | orchestrator | 2026-03-31 04:55:31.697263 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-31 04:55:31.697270 | orchestrator | 2026-03-31 04:55:31.697277 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:55:31.697283 | orchestrator | Tuesday 31 March 2026 04:55:29 +0000 (0:00:02.002) 0:21:02.513 ********* 2026-03-31 04:55:31.697289 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-31 04:55:31.697295 | orchestrator | 2026-03-31 04:55:31.697301 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 
2026-03-31 04:55:31.697307 | orchestrator | Tuesday 31 March 2026 04:55:30 +0000 (0:00:00.278) 0:21:02.791 ********* 2026-03-31 04:55:31.697312 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697318 | orchestrator | 2026-03-31 04:55:31.697327 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:55:31.697333 | orchestrator | Tuesday 31 March 2026 04:55:30 +0000 (0:00:00.480) 0:21:03.271 ********* 2026-03-31 04:55:31.697339 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697345 | orchestrator | 2026-03-31 04:55:31.697350 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:55:31.697361 | orchestrator | Tuesday 31 March 2026 04:55:30 +0000 (0:00:00.150) 0:21:03.422 ********* 2026-03-31 04:55:31.697367 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697372 | orchestrator | 2026-03-31 04:55:31.697378 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:55:31.697384 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.472) 0:21:03.894 ********* 2026-03-31 04:55:31.697390 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697395 | orchestrator | 2026-03-31 04:55:31.697401 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:55:31.697407 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.156) 0:21:04.050 ********* 2026-03-31 04:55:31.697413 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697418 | orchestrator | 2026-03-31 04:55:31.697424 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:55:31.697430 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.148) 0:21:04.198 ********* 2026-03-31 04:55:31.697436 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:31.697441 | orchestrator | 2026-03-31 
04:55:31.697451 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:55:40.304215 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.163) 0:21:04.362 ********* 2026-03-31 04:55:40.304333 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.304351 | orchestrator | 2026-03-31 04:55:40.304364 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:55:40.304376 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.156) 0:21:04.519 ********* 2026-03-31 04:55:40.304387 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.304399 | orchestrator | 2026-03-31 04:55:40.304410 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:55:40.304422 | orchestrator | Tuesday 31 March 2026 04:55:31 +0000 (0:00:00.148) 0:21:04.668 ********* 2026-03-31 04:55:40.304433 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:55:40.304444 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:55:40.304456 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:55:40.304467 | orchestrator | 2026-03-31 04:55:40.304478 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:55:40.304489 | orchestrator | Tuesday 31 March 2026 04:55:33 +0000 (0:00:01.304) 0:21:05.972 ********* 2026-03-31 04:55:40.304500 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.304511 | orchestrator | 2026-03-31 04:55:40.304522 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:55:40.304592 | orchestrator | Tuesday 31 March 2026 04:55:33 +0000 (0:00:00.289) 0:21:06.262 ********* 2026-03-31 04:55:40.304604 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-03-31 04:55:40.304616 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:55:40.304626 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:55:40.304637 | orchestrator | 2026-03-31 04:55:40.304648 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:55:40.304659 | orchestrator | Tuesday 31 March 2026 04:55:35 +0000 (0:00:01.841) 0:21:08.104 ********* 2026-03-31 04:55:40.304670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:55:40.304681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:55:40.304692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:55:40.304703 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.304713 | orchestrator | 2026-03-31 04:55:40.304724 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:55:40.304735 | orchestrator | Tuesday 31 March 2026 04:55:35 +0000 (0:00:00.449) 0:21:08.553 ********* 2026-03-31 04:55:40.304774 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304790 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304803 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304816 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.304828 | orchestrator | 2026-03-31 04:55:40.304840 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:55:40.304856 | orchestrator | Tuesday 31 March 2026 04:55:36 +0000 (0:00:00.683) 0:21:09.237 ********* 2026-03-31 04:55:40.304895 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304924 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:40.304955 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.304968 | orchestrator | 2026-03-31 04:55:40.304981 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-31 04:55:40.304993 | orchestrator | Tuesday 31 March 2026 04:55:36 +0000 (0:00:00.203) 0:21:09.440 ********* 2026-03-31 04:55:40.305009 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:55:34.081097', 'end': '2026-03-31 04:55:34.128692', 'delta': '0:00:00.047595', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:55:40.305025 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:55:34.637326', 'end': '2026-03-31 04:55:34.686034', 'delta': '0:00:00.048708', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:55:40.305048 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:55:35.217153', 'end': 
'2026-03-31 04:55:35.270436', 'delta': '0:00:00.053283', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:55:40.305061 | orchestrator | 2026-03-31 04:55:40.305074 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:55:40.305086 | orchestrator | Tuesday 31 March 2026 04:55:37 +0000 (0:00:00.245) 0:21:09.686 ********* 2026-03-31 04:55:40.305099 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.305111 | orchestrator | 2026-03-31 04:55:40.305122 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:55:40.305133 | orchestrator | Tuesday 31 March 2026 04:55:37 +0000 (0:00:00.285) 0:21:09.971 ********* 2026-03-31 04:55:40.305144 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.305155 | orchestrator | 2026-03-31 04:55:40.305166 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:55:40.305177 | orchestrator | Tuesday 31 March 2026 04:55:37 +0000 (0:00:00.250) 0:21:10.222 ********* 2026-03-31 04:55:40.305188 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.305199 | orchestrator | 2026-03-31 04:55:40.305215 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:55:40.305227 | orchestrator | Tuesday 31 March 2026 04:55:37 +0000 (0:00:00.170) 0:21:10.392 ********* 2026-03-31 04:55:40.305238 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.305249 | orchestrator | 2026-03-31 
04:55:40.305259 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:55:40.305271 | orchestrator | Tuesday 31 March 2026 04:55:38 +0000 (0:00:01.043) 0:21:11.436 ********* 2026-03-31 04:55:40.305281 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:40.305292 | orchestrator | 2026-03-31 04:55:40.305303 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:55:40.305314 | orchestrator | Tuesday 31 March 2026 04:55:38 +0000 (0:00:00.153) 0:21:11.590 ********* 2026-03-31 04:55:40.305325 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.305336 | orchestrator | 2026-03-31 04:55:40.305347 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:55:40.305358 | orchestrator | Tuesday 31 March 2026 04:55:39 +0000 (0:00:00.161) 0:21:11.751 ********* 2026-03-31 04:55:40.305369 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.305380 | orchestrator | 2026-03-31 04:55:40.305391 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:55:40.305402 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:01.067) 0:21:12.818 ********* 2026-03-31 04:55:40.305413 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:40.305424 | orchestrator | 2026-03-31 04:55:40.305441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:55:41.884066 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:00.153) 0:21:12.972 ********* 2026-03-31 04:55:41.884160 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884175 | orchestrator | 2026-03-31 04:55:41.884188 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:55:41.884198 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:00.170) 0:21:13.143 
********* 2026-03-31 04:55:41.884231 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884242 | orchestrator | 2026-03-31 04:55:41.884252 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:55:41.884262 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:00.138) 0:21:13.281 ********* 2026-03-31 04:55:41.884272 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884282 | orchestrator | 2026-03-31 04:55:41.884292 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:55:41.884301 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:00.180) 0:21:13.461 ********* 2026-03-31 04:55:41.884311 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884321 | orchestrator | 2026-03-31 04:55:41.884330 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:55:41.884366 | orchestrator | Tuesday 31 March 2026 04:55:40 +0000 (0:00:00.166) 0:21:13.628 ********* 2026-03-31 04:55:41.884377 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884386 | orchestrator | 2026-03-31 04:55:41.884396 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:55:41.884406 | orchestrator | Tuesday 31 March 2026 04:55:41 +0000 (0:00:00.152) 0:21:13.781 ********* 2026-03-31 04:55:41.884416 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884426 | orchestrator | 2026-03-31 04:55:41.884435 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:55:41.884445 | orchestrator | Tuesday 31 March 2026 04:55:41 +0000 (0:00:00.160) 0:21:13.941 ********* 2026-03-31 04:55:41.884457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:55:41.884518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:55:41.884631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:55:41.884660 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:41.884671 | orchestrator | 2026-03-31 04:55:41.884683 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:55:41.884702 | orchestrator | Tuesday 31 March 2026 04:55:41 +0000 (0:00:00.332) 0:21:14.274 ********* 2026-03-31 04:55:41.884722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032628 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032766 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032836 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032925 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032969 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61782125', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1', 'scsi-SQEMU_QEMU_HARDDISK_61782125-295c-4c38-b420-ceea0e244801-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.032985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.033003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:55:43.033023 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:55:43.033037 | orchestrator | 2026-03-31 04:55:43.033049 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:55:43.033061 | 
orchestrator | Tuesday 31 March 2026 04:55:41 +0000 (0:00:00.286) 0:21:14.560 ********* 2026-03-31 04:55:43.033073 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:43.033085 | orchestrator | 2026-03-31 04:55:43.033096 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:55:43.033108 | orchestrator | Tuesday 31 March 2026 04:55:42 +0000 (0:00:00.482) 0:21:15.043 ********* 2026-03-31 04:55:43.033119 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:43.033130 | orchestrator | 2026-03-31 04:55:43.033216 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:55:43.033228 | orchestrator | Tuesday 31 March 2026 04:55:42 +0000 (0:00:00.152) 0:21:15.195 ********* 2026-03-31 04:55:43.033239 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:55:43.033250 | orchestrator | 2026-03-31 04:55:43.033261 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:55:43.033281 | orchestrator | Tuesday 31 March 2026 04:55:43 +0000 (0:00:00.505) 0:21:15.700 ********* 2026-03-31 04:56:10.429092 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:56:10.429211 | orchestrator | 2026-03-31 04:56:10.429229 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:56:10.429243 | orchestrator | Tuesday 31 March 2026 04:55:43 +0000 (0:00:00.489) 0:21:16.190 ********* 2026-03-31 04:56:10.429254 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:56:10.429266 | orchestrator | 2026-03-31 04:56:10.429278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:56:10.429289 | orchestrator | Tuesday 31 March 2026 04:55:43 +0000 (0:00:00.249) 0:21:16.440 ********* 2026-03-31 04:56:10.429300 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:56:10.429311 | orchestrator | 2026-03-31 04:56:10.429322 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:56:10.429334 | orchestrator | Tuesday 31 March 2026 04:55:43 +0000 (0:00:00.163) 0:21:16.603 ********* 2026-03-31 04:56:10.429345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:56:10.429357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-31 04:56:10.429368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-31 04:56:10.429379 | orchestrator | 2026-03-31 04:56:10.429390 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:56:10.429401 | orchestrator | Tuesday 31 March 2026 04:55:44 +0000 (0:00:00.761) 0:21:17.365 ********* 2026-03-31 04:56:10.429412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-31 04:56:10.429424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-31 04:56:10.429435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-31 04:56:10.429446 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:56:10.429458 | orchestrator | 2026-03-31 04:56:10.429469 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:56:10.429480 | orchestrator | Tuesday 31 March 2026 04:55:44 +0000 (0:00:00.239) 0:21:17.604 ********* 2026-03-31 04:56:10.429491 | orchestrator | skipping: [testbed-node-0] 2026-03-31 04:56:10.429502 | orchestrator | 2026-03-31 04:56:10.429513 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 04:56:10.429525 | orchestrator | Tuesday 31 March 2026 04:55:45 +0000 (0:00:00.179) 0:21:17.783 ********* 2026-03-31 04:56:10.429536 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:56:10.429547 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 
04:56:10.429619 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:56:10.429634 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:56:10.429647 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:56:10.429659 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:56:10.429672 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:56:10.429685 | orchestrator | 2026-03-31 04:56:10.429698 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 04:56:10.429710 | orchestrator | Tuesday 31 March 2026 04:55:45 +0000 (0:00:00.858) 0:21:18.642 ********* 2026-03-31 04:56:10.429723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-31 04:56:10.429735 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:56:10.429747 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:56:10.429759 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:56:10.429771 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:56:10.429785 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:56:10.429798 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:56:10.429810 | orchestrator | 2026-03-31 04:56:10.429821 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-31 04:56:10.429845 | orchestrator | Tuesday 31 March 2026 04:55:47 +0000 (0:00:01.693) 0:21:20.335 
********* 2026-03-31 04:56:10.429857 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:56:10.429868 | orchestrator | 2026-03-31 04:56:10.429879 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-31 04:56:10.429890 | orchestrator | Tuesday 31 March 2026 04:55:49 +0000 (0:00:02.026) 0:21:22.362 ********* 2026-03-31 04:56:10.429901 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:56:10.429912 | orchestrator | 2026-03-31 04:56:10.429922 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-31 04:56:10.429933 | orchestrator | Tuesday 31 March 2026 04:55:51 +0000 (0:00:01.911) 0:21:24.273 ********* 2026-03-31 04:56:10.429944 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:56:10.429955 | orchestrator | 2026-03-31 04:56:10.429966 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-31 04:56:10.429977 | orchestrator | Tuesday 31 March 2026 04:55:52 +0000 (0:00:01.077) 0:21:25.350 ********* 2026-03-31 04:56:10.430140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4693', 'value': {'gid': 4693, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/2631324454', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 2631324454}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 2631324454}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-31 
04:56:10.430173 | orchestrator | 2026-03-31 04:56:10.430184 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-31 04:56:10.430195 | orchestrator | Tuesday 31 March 2026 04:55:52 +0000 (0:00:00.173) 0:21:25.524 ********* 2026-03-31 04:56:10.430206 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-31 04:56:10.430228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-31 04:56:10.430240 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-03-31 04:56:10.430251 | orchestrator | 2026-03-31 04:56:10.430262 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-31 04:56:10.430273 | orchestrator | Tuesday 31 March 2026 04:55:54 +0000 (0:00:01.221) 0:21:26.745 ********* 2026-03-31 04:56:10.430284 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-31 04:56:10.430295 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-03-31 04:56:10.430306 | orchestrator | 2026-03-31 04:56:10.430318 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-31 04:56:10.430329 | orchestrator | Tuesday 31 March 2026 04:55:54 +0000 (0:00:00.514) 0:21:27.259 ********* 2026-03-31 04:56:10.430340 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:56:10.430351 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:56:10.430362 | orchestrator | 2026-03-31 04:56:10.430373 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-31 04:56:10.430384 | orchestrator | Tuesday 31 March 2026 04:56:03 +0000 (0:00:09.025) 0:21:36.285 ********* 2026-03-31 04:56:10.430395 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 
2026-03-31 04:56:10.430406 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 04:56:10.430417 | orchestrator | 2026-03-31 04:56:10.430428 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-31 04:56:10.430439 | orchestrator | Tuesday 31 March 2026 04:56:06 +0000 (0:00:02.821) 0:21:39.106 ********* 2026-03-31 04:56:10.430450 | orchestrator | ok: [testbed-node-0] 2026-03-31 04:56:10.430461 | orchestrator | 2026-03-31 04:56:10.430472 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-31 04:56:10.430483 | orchestrator | Tuesday 31 March 2026 04:56:07 +0000 (0:00:01.138) 0:21:40.245 ********* 2026-03-31 04:56:10.430494 | orchestrator | changed: [testbed-node-0] 2026-03-31 04:56:10.430505 | orchestrator | 2026-03-31 04:56:10.430516 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-31 04:56:10.430527 | orchestrator | 2026-03-31 04:56:10.430538 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:56:10.430549 | orchestrator | Tuesday 31 March 2026 04:56:07 +0000 (0:00:00.419) 0:21:40.665 ********* 2026-03-31 04:56:10.430560 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-31 04:56:10.430596 | orchestrator | 2026-03-31 04:56:10.430608 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:56:10.430619 | orchestrator | Tuesday 31 March 2026 04:56:08 +0000 (0:00:00.262) 0:21:40.927 ********* 2026-03-31 04:56:10.430629 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430640 | orchestrator | 2026-03-31 04:56:10.430651 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:56:10.430662 | orchestrator | Tuesday 31 March 2026 04:56:08 
+0000 (0:00:00.463) 0:21:41.391 ********* 2026-03-31 04:56:10.430673 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430684 | orchestrator | 2026-03-31 04:56:10.430695 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:56:10.430723 | orchestrator | Tuesday 31 March 2026 04:56:08 +0000 (0:00:00.137) 0:21:41.529 ********* 2026-03-31 04:56:10.430735 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430746 | orchestrator | 2026-03-31 04:56:10.430758 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:56:10.430769 | orchestrator | Tuesday 31 March 2026 04:56:09 +0000 (0:00:00.462) 0:21:41.991 ********* 2026-03-31 04:56:10.430780 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430791 | orchestrator | 2026-03-31 04:56:10.430810 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:56:10.430821 | orchestrator | Tuesday 31 March 2026 04:56:09 +0000 (0:00:00.149) 0:21:42.141 ********* 2026-03-31 04:56:10.430874 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430887 | orchestrator | 2026-03-31 04:56:10.430898 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:56:10.430909 | orchestrator | Tuesday 31 March 2026 04:56:09 +0000 (0:00:00.437) 0:21:42.579 ********* 2026-03-31 04:56:10.430920 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.430931 | orchestrator | 2026-03-31 04:56:10.430942 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:56:10.430953 | orchestrator | Tuesday 31 March 2026 04:56:10 +0000 (0:00:00.195) 0:21:42.774 ********* 2026-03-31 04:56:10.430964 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:10.430975 | orchestrator | 2026-03-31 04:56:10.430986 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-03-31 04:56:10.430997 | orchestrator | Tuesday 31 March 2026 04:56:10 +0000 (0:00:00.162) 0:21:42.937 ********* 2026-03-31 04:56:10.431008 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:10.431019 | orchestrator | 2026-03-31 04:56:10.431040 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:56:18.303529 | orchestrator | Tuesday 31 March 2026 04:56:10 +0000 (0:00:00.157) 0:21:43.094 ********* 2026-03-31 04:56:18.303666 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:56:18.303679 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:56:18.303688 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:56:18.303697 | orchestrator | 2026-03-31 04:56:18.303706 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:56:18.303715 | orchestrator | Tuesday 31 March 2026 04:56:11 +0000 (0:00:00.748) 0:21:43.842 ********* 2026-03-31 04:56:18.303723 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:18.303734 | orchestrator | 2026-03-31 04:56:18.303742 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:56:18.303750 | orchestrator | Tuesday 31 March 2026 04:56:11 +0000 (0:00:00.266) 0:21:44.109 ********* 2026-03-31 04:56:18.303758 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:56:18.303766 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:56:18.303774 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:56:18.303782 | orchestrator | 2026-03-31 04:56:18.303790 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:56:18.303798 | orchestrator | Tuesday 31 March 2026 04:56:13 +0000 (0:00:01.886) 0:21:45.996 ********* 2026-03-31 04:56:18.303808 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 04:56:18.303816 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 04:56:18.303824 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 04:56:18.303832 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.303841 | orchestrator | 2026-03-31 04:56:18.303849 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:56:18.303857 | orchestrator | Tuesday 31 March 2026 04:56:13 +0000 (0:00:00.475) 0:21:46.471 ********* 2026-03-31 04:56:18.303866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303906 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303915 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.303924 | orchestrator | 2026-03-31 04:56:18.303932 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:56:18.303940 | orchestrator | Tuesday 31 March 2026 04:56:14 +0000 
(0:00:01.184) 0:21:47.655 ********* 2026-03-31 04:56:18.303950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:18.303991 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.303999 | orchestrator | 2026-03-31 04:56:18.304007 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:56:18.304015 | orchestrator | Tuesday 31 March 2026 04:56:15 +0000 (0:00:00.178) 0:21:47.834 ********* 2026-03-31 04:56:18.304039 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 
04:56:11.949130', 'end': '2026-03-31 04:56:11.994422', 'delta': '0:00:00.045292', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:56:18.304050 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:56:12.487964', 'end': '2026-03-31 04:56:12.545474', 'delta': '0:00:00.057510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:56:18.304059 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:56:13.090596', 'end': '2026-03-31 04:56:13.146591', 'delta': '0:00:00.055995', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:56:18.304074 | orchestrator | 2026-03-31 04:56:18.304083 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:56:18.304093 | orchestrator | Tuesday 31 March 2026 04:56:15 +0000 (0:00:00.236) 0:21:48.070 ********* 2026-03-31 04:56:18.304102 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:18.304111 | orchestrator | 2026-03-31 04:56:18.304120 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:56:18.304129 | orchestrator | Tuesday 31 March 2026 04:56:15 +0000 (0:00:00.267) 0:21:48.338 ********* 2026-03-31 04:56:18.304139 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.304148 | orchestrator | 2026-03-31 04:56:18.304157 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:56:18.304166 | orchestrator | Tuesday 31 March 2026 04:56:15 +0000 (0:00:00.250) 0:21:48.589 ********* 2026-03-31 04:56:18.304175 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:18.304184 | orchestrator | 2026-03-31 04:56:18.304194 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:56:18.304203 | orchestrator | Tuesday 31 March 2026 04:56:16 +0000 (0:00:00.449) 0:21:49.038 ********* 2026-03-31 04:56:18.304212 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:56:18.304225 | orchestrator | 2026-03-31 04:56:18.304234 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:56:18.304244 | orchestrator | Tuesday 31 March 2026 04:56:17 +0000 (0:00:00.979) 0:21:50.018 ********* 2026-03-31 04:56:18.304253 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:18.304262 | orchestrator | 2026-03-31 
04:56:18.304271 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:56:18.304281 | orchestrator | Tuesday 31 March 2026 04:56:17 +0000 (0:00:00.159) 0:21:50.177 ********* 2026-03-31 04:56:18.304290 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.304299 | orchestrator | 2026-03-31 04:56:18.304308 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:56:18.304318 | orchestrator | Tuesday 31 March 2026 04:56:17 +0000 (0:00:00.136) 0:21:50.314 ********* 2026-03-31 04:56:18.304326 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.304336 | orchestrator | 2026-03-31 04:56:18.304345 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:56:18.304354 | orchestrator | Tuesday 31 March 2026 04:56:17 +0000 (0:00:00.235) 0:21:50.549 ********* 2026-03-31 04:56:18.304364 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.304373 | orchestrator | 2026-03-31 04:56:18.304382 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:56:18.304392 | orchestrator | Tuesday 31 March 2026 04:56:17 +0000 (0:00:00.129) 0:21:50.679 ********* 2026-03-31 04:56:18.304401 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:18.304410 | orchestrator | 2026-03-31 04:56:18.304419 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:56:18.304429 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.132) 0:21:50.811 ********* 2026-03-31 04:56:18.304443 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:19.213990 | orchestrator | 2026-03-31 04:56:19.214153 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:56:19.214172 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.166) 0:21:50.978 
********* 2026-03-31 04:56:19.214184 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:19.214197 | orchestrator | 2026-03-31 04:56:19.214208 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:56:19.214247 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.148) 0:21:51.126 ********* 2026-03-31 04:56:19.214259 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:19.214271 | orchestrator | 2026-03-31 04:56:19.214282 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:56:19.214293 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.204) 0:21:51.330 ********* 2026-03-31 04:56:19.214303 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:19.214314 | orchestrator | 2026-03-31 04:56:19.214325 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:56:19.214337 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.144) 0:21:51.475 ********* 2026-03-31 04:56:19.214347 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:19.214358 | orchestrator | 2026-03-31 04:56:19.214369 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:56:19.214380 | orchestrator | Tuesday 31 March 2026 04:56:18 +0000 (0:00:00.185) 0:21:51.660 ********* 2026-03-31 04:56:19.214393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}})  2026-03-31 04:56:19.214424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:56:19.214451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}})  2026-03-31 04:56:19.214465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:56:19.214530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.214603 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}})  2026-03-31 04:56:19.214620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}})  2026-03-31 04:56:19.214650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.581655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:56:19.581755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.581788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:56:19.581802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:56:19.581834 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:19.581846 | orchestrator | 2026-03-31 04:56:19.581857 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:56:19.581868 | orchestrator | Tuesday 31 March 2026 04:56:19 +0000 (0:00:00.368) 0:21:52.029 ********* 2026-03-31 04:56:19.581897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:19.581909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:19.581921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:19.581939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:19.581951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:19.581975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.053828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.053932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.053947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.053977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:20.054189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:30.633502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:56:30.633690 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:30.633711 | orchestrator | 2026-03-31 04:56:30.633724 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-31 04:56:30.633737 | orchestrator | Tuesday 31 March 2026 04:56:20 +0000 (0:00:00.685) 0:21:52.715 ********* 2026-03-31 04:56:30.633748 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:30.633761 | orchestrator | 2026-03-31 04:56:30.633773 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-31 04:56:30.633784 | orchestrator | Tuesday 31 March 2026 04:56:20 +0000 (0:00:00.507) 0:21:53.222 ********* 2026-03-31 04:56:30.633796 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:30.633807 | orchestrator | 2026-03-31 04:56:30.633819 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:56:30.633830 | orchestrator | Tuesday 31 March 2026 04:56:20 +0000 (0:00:00.163) 0:21:53.386 ********* 2026-03-31 04:56:30.633841 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:56:30.633852 | orchestrator | 2026-03-31 04:56:30.633863 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:56:30.633874 | orchestrator | Tuesday 31 March 2026 04:56:21 +0000 (0:00:00.486) 0:21:53.872 ********* 2026-03-31 04:56:30.633909 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:30.633921 | orchestrator | 2026-03-31 04:56:30.633932 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-31 04:56:30.633943 | orchestrator | Tuesday 31 March 2026 04:56:21 +0000 (0:00:00.155) 0:21:54.028 ********* 2026-03-31 04:56:30.633954 | orchestrator | skipping: [testbed-node-5] 2026-03-31 
04:56:30.633965 | orchestrator | 2026-03-31 04:56:30.633976 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-31 04:56:30.633987 | orchestrator | Tuesday 31 March 2026 04:56:21 +0000 (0:00:00.237) 0:21:54.265 ********* 2026-03-31 04:56:30.633997 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:30.634008 | orchestrator | 2026-03-31 04:56:30.634093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-31 04:56:30.634107 | orchestrator | Tuesday 31 March 2026 04:56:21 +0000 (0:00:00.144) 0:21:54.409 ********* 2026-03-31 04:56:30.634120 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-31 04:56:30.634133 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-31 04:56:30.634146 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-31 04:56:30.634158 | orchestrator | 2026-03-31 04:56:30.634170 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-31 04:56:30.634182 | orchestrator | Tuesday 31 March 2026 04:56:22 +0000 (0:00:00.686) 0:21:55.096 ********* 2026-03-31 04:56:30.634195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 04:56:30.634207 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 04:56:30.634220 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 04:56:30.634233 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:56:30.634244 | orchestrator | 2026-03-31 04:56:30.634255 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-31 04:56:30.634266 | orchestrator | Tuesday 31 March 2026 04:56:22 +0000 (0:00:00.169) 0:21:55.266 ********* 2026-03-31 04:56:30.634277 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-31 04:56:30.634288 | 
orchestrator |
2026-03-31 04:56:30.634300 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:56:30.634313 | orchestrator | Tuesday 31 March 2026 04:56:22 +0000 (0:00:00.252) 0:21:55.518 *********
2026-03-31 04:56:30.634324 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634335 | orchestrator |
2026-03-31 04:56:30.634346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:56:30.634357 | orchestrator | Tuesday 31 March 2026 04:56:22 +0000 (0:00:00.149) 0:21:55.667 *********
2026-03-31 04:56:30.634367 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634378 | orchestrator |
2026-03-31 04:56:30.634389 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:56:30.634400 | orchestrator | Tuesday 31 March 2026 04:56:23 +0000 (0:00:00.148) 0:21:55.816 *********
2026-03-31 04:56:30.634411 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634422 | orchestrator |
2026-03-31 04:56:30.634433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:56:30.634444 | orchestrator | Tuesday 31 March 2026 04:56:23 +0000 (0:00:00.182) 0:21:55.998 *********
2026-03-31 04:56:30.634455 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:30.634466 | orchestrator |
2026-03-31 04:56:30.634477 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:56:30.634488 | orchestrator | Tuesday 31 March 2026 04:56:24 +0000 (0:00:01.038) 0:21:57.036 *********
2026-03-31 04:56:30.634499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 04:56:30.634528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 04:56:30.634540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 04:56:30.634560 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634571 | orchestrator |
2026-03-31 04:56:30.634582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:56:30.634618 | orchestrator | Tuesday 31 March 2026 04:56:24 +0000 (0:00:00.424) 0:21:57.460 *********
2026-03-31 04:56:30.634631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 04:56:30.634642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 04:56:30.634652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 04:56:30.634663 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634674 | orchestrator |
2026-03-31 04:56:30.634685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:56:30.634695 | orchestrator | Tuesday 31 March 2026 04:56:25 +0000 (0:00:00.439) 0:21:57.899 *********
2026-03-31 04:56:30.634706 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 04:56:30.634717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 04:56:30.634728 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 04:56:30.634739 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.634750 | orchestrator |
2026-03-31 04:56:30.634760 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:56:30.634771 | orchestrator | Tuesday 31 March 2026 04:56:25 +0000 (0:00:00.456) 0:21:58.356 *********
2026-03-31 04:56:30.634782 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:30.634793 | orchestrator |
2026-03-31 04:56:30.634803 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:56:30.634814 | orchestrator | Tuesday 31 March 2026 04:56:25 +0000 (0:00:00.166) 0:21:58.522 *********
2026-03-31 04:56:30.634825 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-31 04:56:30.634836 | orchestrator |
2026-03-31 04:56:30.634847 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:56:30.634858 | orchestrator | Tuesday 31 March 2026 04:56:26 +0000 (0:00:00.356) 0:21:58.879 *********
2026-03-31 04:56:30.634869 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:56:30.634880 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:56:30.634891 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:56:30.634901 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:56:30.634912 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:56:30.634923 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 04:56:30.634940 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:56:30.634951 | orchestrator |
2026-03-31 04:56:30.634962 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:56:30.634973 | orchestrator | Tuesday 31 March 2026 04:56:27 +0000 (0:00:00.865) 0:21:59.744 *********
2026-03-31 04:56:30.634984 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:56:30.634995 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:56:30.635005 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:56:30.635016 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:56:30.635027 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:56:30.635038 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 04:56:30.635049 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:56:30.635059 | orchestrator |
2026-03-31 04:56:30.635070 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-03-31 04:56:30.635088 | orchestrator | Tuesday 31 March 2026 04:56:28 +0000 (0:00:01.895) 0:22:01.639 *********
2026-03-31 04:56:30.635099 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.635109 | orchestrator |
2026-03-31 04:56:30.635120 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:56:30.635131 | orchestrator | Tuesday 31 March 2026 04:56:29 +0000 (0:00:00.122) 0:22:01.762 *********
2026-03-31 04:56:30.635142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-31 04:56:30.635153 | orchestrator |
2026-03-31 04:56:30.635163 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:56:30.635174 | orchestrator | Tuesday 31 March 2026 04:56:29 +0000 (0:00:00.201) 0:22:01.963 *********
2026-03-31 04:56:30.635185 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-31 04:56:30.635196 | orchestrator |
2026-03-31 04:56:30.635206 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:56:30.635217 | orchestrator | Tuesday 31 March 2026 04:56:29 +0000 (0:00:00.693) 0:22:02.657 *********
2026-03-31 04:56:30.635228 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:30.635239 | orchestrator |
2026-03-31 04:56:30.635250 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:56:30.635260 | orchestrator | Tuesday 31 March 2026 04:56:30 +0000 (0:00:00.156) 0:22:02.814 *********
2026-03-31 04:56:30.635271 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:30.635282 | orchestrator |
2026-03-31 04:56:30.635293 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:56:30.635310 | orchestrator | Tuesday 31 March 2026 04:56:30 +0000 (0:00:00.485) 0:22:03.300 *********
2026-03-31 04:56:42.042179 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042348 | orchestrator |
2026-03-31 04:56:42.042366 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:56:42.042380 | orchestrator | Tuesday 31 March 2026 04:56:31 +0000 (0:00:00.551) 0:22:03.851 *********
2026-03-31 04:56:42.042392 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042403 | orchestrator |
2026-03-31 04:56:42.042415 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:56:42.042426 | orchestrator | Tuesday 31 March 2026 04:56:31 +0000 (0:00:00.540) 0:22:04.392 *********
2026-03-31 04:56:42.042438 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.042450 | orchestrator |
2026-03-31 04:56:42.042461 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:56:42.042486 | orchestrator | Tuesday 31 March 2026 04:56:31 +0000 (0:00:00.143) 0:22:04.536 *********
2026-03-31 04:56:42.042499 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.042511 | orchestrator |
2026-03-31 04:56:42.042522 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:56:42.042534 | orchestrator | Tuesday 31 March 2026 04:56:31 +0000 (0:00:00.135) 0:22:04.671 *********
2026-03-31 04:56:42.042545 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.042556 | orchestrator |
2026-03-31 04:56:42.042567 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:56:42.042578 | orchestrator | Tuesday 31 March 2026 04:56:32 +0000 (0:00:00.136) 0:22:04.808 *********
2026-03-31 04:56:42.042589 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042601 | orchestrator |
2026-03-31 04:56:42.042647 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:56:42.042669 | orchestrator | Tuesday 31 March 2026 04:56:32 +0000 (0:00:00.522) 0:22:05.330 *********
2026-03-31 04:56:42.042690 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042710 | orchestrator |
2026-03-31 04:56:42.042727 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:56:42.042740 | orchestrator | Tuesday 31 March 2026 04:56:33 +0000 (0:00:00.535) 0:22:05.866 *********
2026-03-31 04:56:42.042754 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.042793 | orchestrator |
2026-03-31 04:56:42.042805 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:56:42.042816 | orchestrator | Tuesday 31 March 2026 04:56:33 +0000 (0:00:00.126) 0:22:05.993 *********
2026-03-31 04:56:42.042827 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.042838 | orchestrator |
2026-03-31 04:56:42.042849 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:56:42.042860 | orchestrator | Tuesday 31 March 2026 04:56:33 +0000 (0:00:00.131) 0:22:06.125 *********
2026-03-31 04:56:42.042871 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042882 | orchestrator |
2026-03-31 04:56:42.042893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 04:56:42.042904 | orchestrator | Tuesday 31 March 2026 04:56:33 +0000 (0:00:00.163) 0:22:06.289 *********
2026-03-31 04:56:42.042930 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042941 | orchestrator |
2026-03-31 04:56:42.042952 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 04:56:42.042964 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.487) 0:22:06.776 *********
2026-03-31 04:56:42.042975 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.042986 | orchestrator |
2026-03-31 04:56:42.042997 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 04:56:42.043008 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.163) 0:22:06.940 *********
2026-03-31 04:56:42.043019 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043030 | orchestrator |
2026-03-31 04:56:42.043041 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 04:56:42.043053 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.160) 0:22:07.100 *********
2026-03-31 04:56:42.043063 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043075 | orchestrator |
2026-03-31 04:56:42.043086 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 04:56:42.043097 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.131) 0:22:07.232 *********
2026-03-31 04:56:42.043107 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043119 | orchestrator |
2026-03-31 04:56:42.043130 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 04:56:42.043141 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.151) 0:22:07.384 *********
2026-03-31 04:56:42.043152 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.043163 | orchestrator |
2026-03-31 04:56:42.043174 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 04:56:42.043185 | orchestrator | Tuesday 31 March 2026 04:56:34 +0000 (0:00:00.175) 0:22:07.559 *********
2026-03-31 04:56:42.043196 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.043207 | orchestrator |
2026-03-31 04:56:42.043218 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 04:56:42.043229 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.238) 0:22:07.797 *********
2026-03-31 04:56:42.043240 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043251 | orchestrator |
2026-03-31 04:56:42.043262 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 04:56:42.043273 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.137) 0:22:07.935 *********
2026-03-31 04:56:42.043284 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043295 | orchestrator |
2026-03-31 04:56:42.043306 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 04:56:42.043317 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.138) 0:22:08.073 *********
2026-03-31 04:56:42.043328 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043339 | orchestrator |
2026-03-31 04:56:42.043350 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 04:56:42.043361 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.147) 0:22:08.221 *********
2026-03-31 04:56:42.043372 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043391 | orchestrator |
2026-03-31 04:56:42.043403 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 04:56:42.043431 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.136) 0:22:08.358 *********
2026-03-31 04:56:42.043443 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043454 | orchestrator |
2026-03-31 04:56:42.043465 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 04:56:42.043476 | orchestrator | Tuesday 31 March 2026 04:56:35 +0000 (0:00:00.152) 0:22:08.511 *********
2026-03-31 04:56:42.043487 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043498 | orchestrator |
2026-03-31 04:56:42.043509 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 04:56:42.043523 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.440) 0:22:08.951 *********
2026-03-31 04:56:42.043543 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043561 | orchestrator |
2026-03-31 04:56:42.043578 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 04:56:42.043598 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.130) 0:22:09.082 *********
2026-03-31 04:56:42.043666 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043687 | orchestrator |
2026-03-31 04:56:42.043706 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 04:56:42.043724 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.132) 0:22:09.215 *********
2026-03-31 04:56:42.043744 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043756 | orchestrator |
2026-03-31 04:56:42.043767 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 04:56:42.043778 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.121) 0:22:09.336 *********
2026-03-31 04:56:42.043789 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043799 | orchestrator |
2026-03-31 04:56:42.043810 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 04:56:42.043821 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.131) 0:22:09.468 *********
2026-03-31 04:56:42.043832 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043843 | orchestrator |
2026-03-31 04:56:42.043854 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 04:56:42.043864 | orchestrator | Tuesday 31 March 2026 04:56:36 +0000 (0:00:00.137) 0:22:09.605 *********
2026-03-31 04:56:42.043875 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.043886 | orchestrator |
2026-03-31 04:56:42.043897 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 04:56:42.043908 | orchestrator | Tuesday 31 March 2026 04:56:37 +0000 (0:00:00.196) 0:22:09.802 *********
2026-03-31 04:56:42.043919 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.043930 | orchestrator |
2026-03-31 04:56:42.043941 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 04:56:42.043952 | orchestrator | Tuesday 31 March 2026 04:56:38 +0000 (0:00:00.965) 0:22:10.767 *********
2026-03-31 04:56:42.043963 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.043974 | orchestrator |
2026-03-31 04:56:42.043985 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 04:56:42.044004 | orchestrator | Tuesday 31 March 2026 04:56:39 +0000 (0:00:01.216) 0:22:11.984 *********
2026-03-31 04:56:42.044015 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-31 04:56:42.044027 | orchestrator |
2026-03-31 04:56:42.044038 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 04:56:42.044049 | orchestrator | Tuesday 31 March 2026 04:56:39 +0000 (0:00:00.230) 0:22:12.214 *********
2026-03-31 04:56:42.044060 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.044071 | orchestrator |
2026-03-31 04:56:42.044082 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 04:56:42.044093 | orchestrator | Tuesday 31 March 2026 04:56:39 +0000 (0:00:00.139) 0:22:12.354 *********
2026-03-31 04:56:42.044114 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.044125 | orchestrator |
2026-03-31 04:56:42.044136 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 04:56:42.044147 | orchestrator | Tuesday 31 March 2026 04:56:39 +0000 (0:00:00.140) 0:22:12.495 *********
2026-03-31 04:56:42.044158 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 04:56:42.044169 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 04:56:42.044180 | orchestrator |
2026-03-31 04:56:42.044191 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 04:56:42.044202 | orchestrator | Tuesday 31 March 2026 04:56:40 +0000 (0:00:01.086) 0:22:13.581 *********
2026-03-31 04:56:42.044213 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:42.044224 | orchestrator |
2026-03-31 04:56:42.044235 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 04:56:42.044245 | orchestrator | Tuesday 31 March 2026 04:56:41 +0000 (0:00:00.489) 0:22:14.071 *********
2026-03-31 04:56:42.044256 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.044267 | orchestrator |
2026-03-31 04:56:42.044284 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 04:56:42.044303 | orchestrator | Tuesday 31 March 2026 04:56:41 +0000 (0:00:00.153) 0:22:14.224 *********
2026-03-31 04:56:42.044319 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.044338 | orchestrator |
2026-03-31 04:56:42.044356 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 04:56:42.044405 | orchestrator | Tuesday 31 March 2026 04:56:41 +0000 (0:00:00.161) 0:22:14.385 *********
2026-03-31 04:56:42.044425 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:42.044458 | orchestrator |
2026-03-31 04:56:42.044479 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 04:56:42.044500 | orchestrator | Tuesday 31 March 2026 04:56:41 +0000 (0:00:00.127) 0:22:14.512 *********
2026-03-31 04:56:42.044520 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-31 04:56:42.044541 | orchestrator |
2026-03-31 04:56:42.044561 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 04:56:42.044597 | orchestrator | Tuesday 31 March 2026 04:56:42 +0000 (0:00:00.197) 0:22:14.710 *********
2026-03-31 04:56:56.589331 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:56.589442 | orchestrator |
2026-03-31 04:56:56.589457 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 04:56:56.589468 | orchestrator | Tuesday 31 March 2026 04:56:42 +0000 (0:00:00.683) 0:22:15.393 *********
2026-03-31 04:56:56.589479 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 04:56:56.589488 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 04:56:56.589497 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 04:56:56.589506 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589516 | orchestrator |
2026-03-31 04:56:56.589525 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 04:56:56.589534 | orchestrator | Tuesday 31 March 2026 04:56:42 +0000 (0:00:00.149) 0:22:15.543 *********
2026-03-31 04:56:56.589545 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589561 | orchestrator |
2026-03-31 04:56:56.589576 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 04:56:56.589586 | orchestrator | Tuesday 31 March 2026 04:56:42 +0000 (0:00:00.129) 0:22:15.673 *********
2026-03-31 04:56:56.589595 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589609 | orchestrator |
2026-03-31 04:56:56.589685 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 04:56:56.589697 | orchestrator | Tuesday 31 March 2026 04:56:43 +0000 (0:00:00.172) 0:22:15.846 *********
2026-03-31 04:56:56.589776 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589787 | orchestrator |
2026-03-31 04:56:56.589796 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 04:56:56.589805 | orchestrator | Tuesday 31 March 2026 04:56:43 +0000 (0:00:00.158) 0:22:16.004 *********
2026-03-31 04:56:56.589814 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589823 | orchestrator |
2026-03-31 04:56:56.589832 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 04:56:56.589840 | orchestrator | Tuesday 31 March 2026 04:56:43 +0000 (0:00:00.163) 0:22:16.167 *********
2026-03-31 04:56:56.589849 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.589858 | orchestrator |
2026-03-31 04:56:56.589867 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 04:56:56.589878 | orchestrator | Tuesday 31 March 2026 04:56:43 +0000 (0:00:00.456) 0:22:16.624 *********
2026-03-31 04:56:56.589888 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:56.589899 | orchestrator |
2026-03-31 04:56:56.589908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 04:56:56.589918 | orchestrator | Tuesday 31 March 2026 04:56:45 +0000 (0:00:01.492) 0:22:18.117 *********
2026-03-31 04:56:56.589928 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:56.589939 | orchestrator |
2026-03-31 04:56:56.589962 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 04:56:56.589972 | orchestrator | Tuesday 31 March 2026 04:56:45 +0000 (0:00:00.143) 0:22:18.261 *********
2026-03-31 04:56:56.589982 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-31 04:56:56.589992 | orchestrator |
2026-03-31 04:56:56.590002 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 04:56:56.590060 | orchestrator | Tuesday 31 March 2026 04:56:45 +0000 (0:00:00.232) 0:22:18.493 *********
2026-03-31 04:56:56.590071 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590079 | orchestrator |
2026-03-31 04:56:56.590088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 04:56:56.590097 | orchestrator | Tuesday 31 March 2026 04:56:45 +0000 (0:00:00.151) 0:22:18.644 *********
2026-03-31 04:56:56.590106 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590114 | orchestrator |
2026-03-31 04:56:56.590123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 04:56:56.590132 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.150) 0:22:18.794 *********
2026-03-31 04:56:56.590141 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590150 | orchestrator |
2026-03-31 04:56:56.590159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 04:56:56.590167 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.161) 0:22:18.956 *********
2026-03-31 04:56:56.590176 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590185 | orchestrator |
2026-03-31 04:56:56.590193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 04:56:56.590202 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.155) 0:22:19.112 *********
2026-03-31 04:56:56.590211 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590220 | orchestrator |
2026-03-31 04:56:56.590228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 04:56:56.590237 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.175) 0:22:19.287 *********
2026-03-31 04:56:56.590246 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590255 | orchestrator |
2026-03-31 04:56:56.590263 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 04:56:56.590272 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.152) 0:22:19.439 *********
2026-03-31 04:56:56.590281 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590289 | orchestrator |
2026-03-31 04:56:56.590298 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 04:56:56.590307 | orchestrator | Tuesday 31 March 2026 04:56:46 +0000 (0:00:00.133) 0:22:19.573 *********
2026-03-31 04:56:56.590323 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590332 | orchestrator |
2026-03-31 04:56:56.590340 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 04:56:56.590349 | orchestrator | Tuesday 31 March 2026 04:56:47 +0000 (0:00:00.154) 0:22:19.727 *********
2026-03-31 04:56:56.590358 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:56:56.590367 | orchestrator |
2026-03-31 04:56:56.590375 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 04:56:56.590402 | orchestrator | Tuesday 31 March 2026 04:56:47 +0000 (0:00:00.554) 0:22:20.282 *********
2026-03-31 04:56:56.590412 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-31 04:56:56.590422 | orchestrator |
2026-03-31 04:56:56.590431 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 04:56:56.590440 | orchestrator | Tuesday 31 March 2026 04:56:47 +0000 (0:00:00.195) 0:22:20.477 *********
2026-03-31 04:56:56.590449 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-31 04:56:56.590458 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-31 04:56:56.590466 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-31 04:56:56.590475 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-31 04:56:56.590484 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-31 04:56:56.590492 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-31 04:56:56.590501 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-31 04:56:56.590509 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-31 04:56:56.590524 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 04:56:56.590540 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 04:56:56.590555 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 04:56:56.590570 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 04:56:56.590586 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 04:56:56.590597 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 04:56:56.590606 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-31 04:56:56.590615 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-31 04:56:56.590646 | orchestrator |
2026-03-31 04:56:56.590655 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:56:56.590664 | orchestrator | Tuesday 31 March 2026 04:56:53 +0000 (0:00:05.460) 0:22:25.938 *********
2026-03-31 04:56:56.590673 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-31 04:56:56.590682 | orchestrator |
2026-03-31 04:56:56.590691 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-31 04:56:56.590699 | orchestrator | Tuesday 31 March 2026 04:56:53 +0000 (0:00:00.212) 0:22:26.150 *********
2026-03-31 04:56:56.590708 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 04:56:56.590718 | orchestrator |
2026-03-31 04:56:56.590727 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-31 04:56:56.590742 | orchestrator | Tuesday 31 March 2026 04:56:53 +0000 (0:00:00.530) 0:22:26.681 *********
2026-03-31 04:56:56.590751 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 04:56:56.590760 | orchestrator |
2026-03-31 04:56:56.590769 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:56:56.590777 | orchestrator | Tuesday 31 March 2026 04:56:54 +0000 (0:00:00.984) 0:22:27.665 *********
2026-03-31 04:56:56.590786 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590801 | orchestrator |
2026-03-31 04:56:56.590810 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 04:56:56.590819 | orchestrator | Tuesday 31 March 2026 04:56:55 +0000 (0:00:00.127) 0:22:27.793 *********
2026-03-31 04:56:56.590828 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590837 | orchestrator |
2026-03-31 04:56:56.590845 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 04:56:56.590854 | orchestrator | Tuesday 31 March 2026 04:56:55 +0000 (0:00:00.150) 0:22:27.944 *********
2026-03-31 04:56:56.590863 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590871 | orchestrator |
2026-03-31 04:56:56.590880 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 04:56:56.590889 | orchestrator | Tuesday 31 March 2026 04:56:55 +0000 (0:00:00.141) 0:22:28.085 *********
2026-03-31 04:56:56.590897 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590906 | orchestrator |
2026-03-31 04:56:56.590915 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 04:56:56.590923 | orchestrator | Tuesday 31 March 2026 04:56:55 +0000 (0:00:00.125) 0:22:28.211 *********
2026-03-31 04:56:56.590932 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590941 | orchestrator |
2026-03-31 04:56:56.590950 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 04:56:56.590958 | orchestrator | Tuesday 31 March 2026 04:56:55 +0000 (0:00:00.433) 0:22:28.645 *********
2026-03-31 04:56:56.590967 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.590976 | orchestrator |
2026-03-31 04:56:56.590985 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 04:56:56.590993 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.158) 0:22:28.804 *********
2026-03-31 04:56:56.591002 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.591011 | orchestrator |
2026-03-31 04:56:56.591020 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 04:56:56.591028 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.139) 0:22:28.944 *********
2026-03-31 04:56:56.591037 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.591046 | orchestrator |
2026-03-31 04:56:56.591055 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 04:56:56.591063 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.160) 0:22:29.104 *********
2026-03-31 04:56:56.591072 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:56:56.591081 | orchestrator |
2026-03-31 04:56:56.591097 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 04:57:20.178775 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.152) 0:22:29.256 *********
2026-03-31 04:57:20.178894 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.178912 | orchestrator |
2026-03-31 04:57:20.178925 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 04:57:20.178937 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.141) 0:22:29.398 *********
2026-03-31 04:57:20.178948 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.178960 | orchestrator |
2026-03-31 04:57:20.178971 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 04:57:20.178982 | orchestrator | Tuesday 31 March 2026 04:56:56 +0000 (0:00:00.162) 0:22:29.561 *********
2026-03-31 04:57:20.178993 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-31 04:57:20.179004 | orchestrator |
2026-03-31 04:57:20.179015 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 04:57:20.179026 | orchestrator | Tuesday 31 March 2026 04:57:00 +0000 (0:00:03.524) 0:22:33.085 *********
2026-03-31 04:57:20.179038 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 04:57:20.179050 | orchestrator |
2026-03-31 04:57:20.179061 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 04:57:20.179097 | orchestrator | Tuesday 31 March 2026 04:57:00 +0000 (0:00:00.201) 0:22:33.287 *********
2026-03-31 04:57:20.179111 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-31 04:57:20.179126 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-31 04:57:20.179138 | orchestrator |
2026-03-31 04:57:20.179149 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 04:57:20.179160 | orchestrator | Tuesday 31 March 2026 04:57:04 +0000 (0:00:03.837) 0:22:37.124 *********
2026-03-31 04:57:20.179171 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.179180 | orchestrator |
2026-03-31 04:57:20.179200 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 04:57:20.179209 | orchestrator | Tuesday 31 March 2026 04:57:04 +0000 (0:00:00.149) 0:22:37.274 *********
2026-03-31 04:57:20.179217 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.179225 | orchestrator |
2026-03-31 04:57:20.179233 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:57:20.179241 | orchestrator | Tuesday 31 March 2026 04:57:04 +0000 (0:00:00.150) 0:22:37.424 *********
2026-03-31 04:57:20.179250 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.179259 | orchestrator |
2026-03-31 04:57:20.179267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:57:20.179276 | orchestrator | Tuesday 31 March 2026 04:57:04 +0000 (0:00:00.164) 0:22:37.589 *********
2026-03-31 04:57:20.179285 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.179294 | orchestrator |
2026-03-31 04:57:20.179303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:57:20.179313 | orchestrator | Tuesday 31 March 2026 04:57:05 +0000 (0:00:00.603) 0:22:38.192 *********
2026-03-31 04:57:20.179322 | orchestrator | skipping: [testbed-node-5]
2026-03-31 04:57:20.179331 | orchestrator |
2026-03-31 04:57:20.179340 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:57:20.179349 | orchestrator | Tuesday 31 March 2026 04:57:05 +0000 (0:00:00.166) 0:22:38.359 *********
2026-03-31 04:57:20.179358 | orchestrator | ok: [testbed-node-5]
2026-03-31 04:57:20.179367 | orchestrator |
2026-03-31 04:57:20.179376 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:57:20.179385 | orchestrator | Tuesday 31 March 2026 04:57:05 +0000 (0:00:00.283) 0:22:38.642
********* 2026-03-31 04:57:20.179394 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:57:20.179403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:57:20.179412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:57:20.179421 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:20.179430 | orchestrator | 2026-03-31 04:57:20.179439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:57:20.179448 | orchestrator | Tuesday 31 March 2026 04:57:06 +0000 (0:00:00.408) 0:22:39.050 ********* 2026-03-31 04:57:20.179457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:57:20.179466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:57:20.179475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:57:20.179484 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:20.179493 | orchestrator | 2026-03-31 04:57:20.179510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:57:20.179519 | orchestrator | Tuesday 31 March 2026 04:57:06 +0000 (0:00:00.419) 0:22:39.470 ********* 2026-03-31 04:57:20.179528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 04:57:20.179537 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 04:57:20.179546 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 04:57:20.179568 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:20.179578 | orchestrator | 2026-03-31 04:57:20.179587 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 04:57:20.179597 | orchestrator | Tuesday 31 March 2026 04:57:07 +0000 (0:00:00.416) 0:22:39.887 ********* 2026-03-31 04:57:20.179606 | orchestrator | 
ok: [testbed-node-5] 2026-03-31 04:57:20.179615 | orchestrator | 2026-03-31 04:57:20.179623 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:57:20.179631 | orchestrator | Tuesday 31 March 2026 04:57:07 +0000 (0:00:00.163) 0:22:40.051 ********* 2026-03-31 04:57:20.179639 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-31 04:57:20.179709 | orchestrator | 2026-03-31 04:57:20.179720 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:57:20.179728 | orchestrator | Tuesday 31 March 2026 04:57:07 +0000 (0:00:00.440) 0:22:40.491 ********* 2026-03-31 04:57:20.179736 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.179744 | orchestrator | 2026-03-31 04:57:20.179751 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-31 04:57:20.179759 | orchestrator | Tuesday 31 March 2026 04:57:08 +0000 (0:00:00.842) 0:22:41.333 ********* 2026-03-31 04:57:20.179767 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:20.179775 | orchestrator | 2026-03-31 04:57:20.179783 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-31 04:57:20.179791 | orchestrator | Tuesday 31 March 2026 04:57:08 +0000 (0:00:00.141) 0:22:41.475 ********* 2026-03-31 04:57:20.179799 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-03-31 04:57:20.179807 | orchestrator | 2026-03-31 04:57:20.179815 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-31 04:57:20.179823 | orchestrator | Tuesday 31 March 2026 04:57:08 +0000 (0:00:00.195) 0:22:41.670 ********* 2026-03-31 04:57:20.179831 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-31 04:57:20.179838 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-03-31 04:57:20.179846 | orchestrator | 2026-03-31 04:57:20.179854 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-31 04:57:20.179862 | orchestrator | Tuesday 31 March 2026 04:57:10 +0000 (0:00:01.149) 0:22:42.820 ********* 2026-03-31 04:57:20.179870 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:57:20.179878 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 04:57:20.179886 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 04:57:20.179894 | orchestrator | 2026-03-31 04:57:20.179902 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:57:20.179914 | orchestrator | Tuesday 31 March 2026 04:57:12 +0000 (0:00:02.239) 0:22:45.060 ********* 2026-03-31 04:57:20.179923 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-31 04:57:20.179931 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 04:57:20.179939 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.179947 | orchestrator | 2026-03-31 04:57:20.179955 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-31 04:57:20.179962 | orchestrator | Tuesday 31 March 2026 04:57:13 +0000 (0:00:00.965) 0:22:46.025 ********* 2026-03-31 04:57:20.179970 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.179978 | orchestrator | 2026-03-31 04:57:20.179986 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-31 04:57:20.180001 | orchestrator | Tuesday 31 March 2026 04:57:13 +0000 (0:00:00.497) 0:22:46.523 ********* 2026-03-31 04:57:20.180009 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:20.180017 | orchestrator | 2026-03-31 04:57:20.180025 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-31 
04:57:20.180033 | orchestrator | Tuesday 31 March 2026 04:57:13 +0000 (0:00:00.153) 0:22:46.676 ********* 2026-03-31 04:57:20.180041 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5 2026-03-31 04:57:20.180049 | orchestrator | 2026-03-31 04:57:20.180057 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-31 04:57:20.180065 | orchestrator | Tuesday 31 March 2026 04:57:14 +0000 (0:00:00.213) 0:22:46.890 ********* 2026-03-31 04:57:20.180073 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5 2026-03-31 04:57:20.180081 | orchestrator | 2026-03-31 04:57:20.180089 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-31 04:57:20.180097 | orchestrator | Tuesday 31 March 2026 04:57:14 +0000 (0:00:00.219) 0:22:47.110 ********* 2026-03-31 04:57:20.180105 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.180112 | orchestrator | 2026-03-31 04:57:20.180121 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-31 04:57:20.180128 | orchestrator | Tuesday 31 March 2026 04:57:15 +0000 (0:00:01.044) 0:22:48.154 ********* 2026-03-31 04:57:20.180136 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.180144 | orchestrator | 2026-03-31 04:57:20.180153 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-31 04:57:20.180160 | orchestrator | Tuesday 31 March 2026 04:57:16 +0000 (0:00:00.910) 0:22:49.064 ********* 2026-03-31 04:57:20.180168 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.180176 | orchestrator | 2026-03-31 04:57:20.180184 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-31 04:57:20.180192 | orchestrator | Tuesday 31 March 2026 04:57:17 +0000 (0:00:01.190) 0:22:50.255 ********* 2026-03-31 04:57:20.180200 | 
orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.180208 | orchestrator | 2026-03-31 04:57:20.180216 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-31 04:57:20.180224 | orchestrator | Tuesday 31 March 2026 04:57:18 +0000 (0:00:01.245) 0:22:51.501 ********* 2026-03-31 04:57:20.180232 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:20.180240 | orchestrator | 2026-03-31 04:57:20.180248 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-03-31 04:57:20.180256 | orchestrator | Tuesday 31 March 2026 04:57:20 +0000 (0:00:01.213) 0:22:52.714 ********* 2026-03-31 04:57:20.180271 | orchestrator | skipping: [testbed-node-5] 2026-03-31 04:57:39.294983 | orchestrator | 2026-03-31 04:57:39.295101 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-03-31 04:57:39.295118 | orchestrator | Tuesday 31 March 2026 04:57:20 +0000 (0:00:00.134) 0:22:52.849 ********* 2026-03-31 04:57:39.295130 | orchestrator | ok: [testbed-node-5] 2026-03-31 04:57:39.295143 | orchestrator | 2026-03-31 04:57:39.295154 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-03-31 04:57:39.295166 | orchestrator | 2026-03-31 04:57:39.295177 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:57:39.295189 | orchestrator | Tuesday 31 March 2026 04:57:30 +0000 (0:00:10.685) 0:23:03.534 ********* 2026-03-31 04:57:39.295200 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-3 2026-03-31 04:57:39.295212 | orchestrator | 2026-03-31 04:57:39.295223 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:57:39.295235 | orchestrator | Tuesday 31 March 2026 04:57:31 +0000 (0:00:00.431) 0:23:03.966 ********* 2026-03-31 04:57:39.295247 | 
orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295258 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295270 | orchestrator | 2026-03-31 04:57:39.295281 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:57:39.295317 | orchestrator | Tuesday 31 March 2026 04:57:31 +0000 (0:00:00.553) 0:23:04.519 ********* 2026-03-31 04:57:39.295329 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295340 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295351 | orchestrator | 2026-03-31 04:57:39.295363 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:57:39.295374 | orchestrator | Tuesday 31 March 2026 04:57:32 +0000 (0:00:00.591) 0:23:05.110 ********* 2026-03-31 04:57:39.295385 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295396 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295407 | orchestrator | 2026-03-31 04:57:39.295418 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:57:39.295429 | orchestrator | Tuesday 31 March 2026 04:57:33 +0000 (0:00:00.591) 0:23:05.702 ********* 2026-03-31 04:57:39.295441 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295452 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295463 | orchestrator | 2026-03-31 04:57:39.295474 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:57:39.295485 | orchestrator | Tuesday 31 March 2026 04:57:33 +0000 (0:00:00.245) 0:23:05.947 ********* 2026-03-31 04:57:39.295497 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295508 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295545 | orchestrator | 2026-03-31 04:57:39.295570 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:57:39.295583 | orchestrator | Tuesday 31 March 2026 
04:57:33 +0000 (0:00:00.255) 0:23:06.202 ********* 2026-03-31 04:57:39.295596 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295624 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295637 | orchestrator | 2026-03-31 04:57:39.295651 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:57:39.295663 | orchestrator | Tuesday 31 March 2026 04:57:33 +0000 (0:00:00.240) 0:23:06.443 ********* 2026-03-31 04:57:39.295697 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:39.295710 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:39.295722 | orchestrator | 2026-03-31 04:57:39.295735 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:57:39.295747 | orchestrator | Tuesday 31 March 2026 04:57:33 +0000 (0:00:00.214) 0:23:06.658 ********* 2026-03-31 04:57:39.295759 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295772 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295785 | orchestrator | 2026-03-31 04:57:39.295798 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:57:39.295810 | orchestrator | Tuesday 31 March 2026 04:57:34 +0000 (0:00:00.256) 0:23:06.914 ********* 2026-03-31 04:57:39.295823 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:57:39.295836 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:57:39.295849 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:57:39.295861 | orchestrator | 2026-03-31 04:57:39.295873 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:57:39.295884 | orchestrator | Tuesday 31 March 2026 04:57:35 +0000 (0:00:01.334) 0:23:08.249 ********* 2026-03-31 04:57:39.295895 | 
orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:39.295906 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:39.295917 | orchestrator | 2026-03-31 04:57:39.295928 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:57:39.295939 | orchestrator | Tuesday 31 March 2026 04:57:35 +0000 (0:00:00.351) 0:23:08.600 ********* 2026-03-31 04:57:39.295950 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:57:39.295962 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:57:39.295973 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:57:39.295992 | orchestrator | 2026-03-31 04:57:39.296003 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:57:39.296014 | orchestrator | Tuesday 31 March 2026 04:57:37 +0000 (0:00:01.813) 0:23:10.414 ********* 2026-03-31 04:57:39.296025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-31 04:57:39.296037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-31 04:57:39.296048 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-31 04:57:39.296059 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:39.296070 | orchestrator | 2026-03-31 04:57:39.296081 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:57:39.296092 | orchestrator | Tuesday 31 March 2026 04:57:38 +0000 (0:00:00.482) 0:23:10.896 ********* 2026-03-31 04:57:39.296121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296137 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296160 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:39.296170 | orchestrator | 2026-03-31 04:57:39.296181 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:57:39.296192 | orchestrator | Tuesday 31 March 2026 04:57:38 +0000 (0:00:00.670) 0:23:11.567 ********* 2026-03-31 04:57:39.296205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296237 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:39.296248 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:39.296260 | orchestrator | 2026-03-31 04:57:39.296270 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:57:39.296281 | orchestrator | Tuesday 31 March 2026 04:57:39 +0000 (0:00:00.174) 0:23:11.741 ********* 2026-03-31 04:57:39.296294 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:57:36.446071', 'end': '2026-03-31 04:57:36.496007', 'delta': '0:00:00.049936', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:57:39.296316 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:57:36.969879', 'end': '2026-03-31 04:57:37.036284', 'delta': '0:00:00.066405', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:57:39.296338 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:57:37.544405', 'end': '2026-03-31 04:57:37.588016', 'delta': '0:00:00.043611', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:57:44.910403 | orchestrator | 2026-03-31 04:57:44.910532 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:57:44.910560 | orchestrator | Tuesday 31 March 2026 04:57:39 +0000 (0:00:00.223) 0:23:11.965 ********* 2026-03-31 04:57:44.910580 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.910601 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.910619 | orchestrator | 2026-03-31 04:57:44.910640 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:57:44.910659 | orchestrator | Tuesday 31 March 2026 04:57:39 +0000 (0:00:00.384) 0:23:12.350 ********* 2026-03-31 04:57:44.910720 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.910735 | orchestrator | 2026-03-31 04:57:44.910746 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:57:44.910757 | orchestrator | Tuesday 31 
March 2026 04:57:39 +0000 (0:00:00.258) 0:23:12.608 ********* 2026-03-31 04:57:44.910769 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.910780 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.910791 | orchestrator | 2026-03-31 04:57:44.910801 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:57:44.910811 | orchestrator | Tuesday 31 March 2026 04:57:40 +0000 (0:00:00.258) 0:23:12.867 ********* 2026-03-31 04:57:44.910821 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:57:44.910831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:57:44.910841 | orchestrator | 2026-03-31 04:57:44.910851 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:57:44.910861 | orchestrator | Tuesday 31 March 2026 04:57:41 +0000 (0:00:01.482) 0:23:14.349 ********* 2026-03-31 04:57:44.910870 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.910880 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.910890 | orchestrator | 2026-03-31 04:57:44.910899 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:57:44.910909 | orchestrator | Tuesday 31 March 2026 04:57:41 +0000 (0:00:00.246) 0:23:14.596 ********* 2026-03-31 04:57:44.910939 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.910976 | orchestrator | 2026-03-31 04:57:44.910986 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:57:44.910996 | orchestrator | Tuesday 31 March 2026 04:57:42 +0000 (0:00:00.121) 0:23:14.717 ********* 2026-03-31 04:57:44.911006 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.911015 | orchestrator | 2026-03-31 04:57:44.911025 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:57:44.911035 
| orchestrator | Tuesday 31 March 2026 04:57:42 +0000 (0:00:00.252) 0:23:14.970 ********* 2026-03-31 04:57:44.911045 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.911055 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:44.911064 | orchestrator | 2026-03-31 04:57:44.911074 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:57:44.911084 | orchestrator | Tuesday 31 March 2026 04:57:42 +0000 (0:00:00.245) 0:23:15.215 ********* 2026-03-31 04:57:44.911094 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.911103 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:44.911113 | orchestrator | 2026-03-31 04:57:44.911123 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:57:44.911132 | orchestrator | Tuesday 31 March 2026 04:57:42 +0000 (0:00:00.238) 0:23:15.454 ********* 2026-03-31 04:57:44.911142 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.911152 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.911162 | orchestrator | 2026-03-31 04:57:44.911171 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:57:44.911181 | orchestrator | Tuesday 31 March 2026 04:57:43 +0000 (0:00:00.313) 0:23:15.768 ********* 2026-03-31 04:57:44.911191 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.911200 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:44.911210 | orchestrator | 2026-03-31 04:57:44.911220 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:57:44.911229 | orchestrator | Tuesday 31 March 2026 04:57:43 +0000 (0:00:00.238) 0:23:16.006 ********* 2026-03-31 04:57:44.911239 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.911249 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.911258 | orchestrator | 2026-03-31 04:57:44.911268 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 04:57:44.911278 | orchestrator | Tuesday 31 March 2026 04:57:44 +0000 (0:00:00.794) 0:23:16.800 ********* 2026-03-31 04:57:44.911287 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:44.911297 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:44.911306 | orchestrator | 2026-03-31 04:57:44.911316 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 04:57:44.911326 | orchestrator | Tuesday 31 March 2026 04:57:44 +0000 (0:00:00.248) 0:23:17.048 ********* 2026-03-31 04:57:44.911336 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:57:44.911346 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:57:44.911356 | orchestrator | 2026-03-31 04:57:44.911366 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 04:57:44.911375 | orchestrator | Tuesday 31 March 2026 04:57:44 +0000 (0:00:00.289) 0:23:17.338 ********* 2026-03-31 04:57:44.911387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:44.911421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}})  2026-03-31 04:57:44.911442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:57:44.911459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}})  2026-03-31 04:57:44.911472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:44.911490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:44.911508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:57:44.911525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:44.911551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}})  2026-03-31 04:57:45.033216 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}})  2026-03-31 04:57:45.033246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}})  2026-03-31 04:57:45.033258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 04:57:45.033336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:57:45.033351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}})  2026-03-31 04:57:45.033363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.033403 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 
'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 04:57:45.176744 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:45.176758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 
'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}})  2026-03-31 04:57:45.176849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}})  2026-03-31 04:57:45.176862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 04:57:45.176906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 04:57:45.176937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 04:57:45.431711 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:57:45.431808 | orchestrator | 2026-03-31 04:57:45.431824 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 04:57:45.431836 | orchestrator | Tuesday 31 March 2026 04:57:45 +0000 (0:00:00.510) 0:23:17.848 ********* 2026-03-31 04:57:45.431867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431898 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.431997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.432010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.432021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.432041 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.432060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.500879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619536 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:57:45.619551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': 
[]}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 
'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:45.619711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:55.554963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:55.555129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 04:57:55.555157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:57:55.555181 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.555215 | orchestrator |
2026-03-31 04:57:55.555238 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 04:57:55.555258 | orchestrator | Tuesday 31 March 2026 04:57:45 +0000 (0:00:00.574) 0:23:18.423 *********
2026-03-31 04:57:55.555276 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:57:55.555293 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:57:55.555312 | orchestrator |
2026-03-31 04:57:55.555331 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 04:57:55.555348 | orchestrator | Tuesday 31 March 2026 04:57:46 +0000 (0:00:00.599) 0:23:19.023 *********
2026-03-31 04:57:55.555366 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:57:55.555384 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:57:55.555402 | orchestrator |
2026-03-31 04:57:55.555421 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:57:55.555440 | orchestrator | Tuesday 31 March 2026 04:57:46 +0000 (0:00:00.552) 0:23:19.575 *********
2026-03-31 04:57:55.555464 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:57:55.555484 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:57:55.555499 | orchestrator |
2026-03-31 04:57:55.555511 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:57:55.555542 | orchestrator | Tuesday 31 March 2026 04:57:47 +0000 (0:00:00.612) 0:23:20.188 *********
2026-03-31 04:57:55.555556 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.555569 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.555582 | orchestrator |
2026-03-31 04:57:55.555617 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:57:55.555629 | orchestrator | Tuesday 31 March 2026 04:57:47 +0000 (0:00:00.346) 0:23:20.426 *********
2026-03-31 04:57:55.555640 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.555651 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.555663 | orchestrator |
2026-03-31 04:57:55.555674 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:57:55.555736 | orchestrator | Tuesday 31 March 2026 04:57:48 +0000 (0:00:00.346) 0:23:20.772 *********
2026-03-31 04:57:55.555759 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.555778 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.555795 | orchestrator |
2026-03-31 04:57:55.555807 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 04:57:55.555818 | orchestrator | Tuesday 31 March 2026 04:57:48 +0000 (0:00:00.256) 0:23:21.028 *********
2026-03-31 04:57:55.555829 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 04:57:55.555840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:57:55.555851 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 04:57:55.555862 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:57:55.555873 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:57:55.555884 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 04:57:55.555895 | orchestrator |
2026-03-31 04:57:55.555906 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 04:57:55.555917 | orchestrator | Tuesday 31 March 2026 04:57:49 +0000 (0:00:01.148) 0:23:22.177 *********
2026-03-31 04:57:55.555950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 04:57:55.555963 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 04:57:55.555974 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 04:57:55.555985 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.555996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:57:55.556007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:57:55.556018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:57:55.556029 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.556040 | orchestrator |
2026-03-31 04:57:55.556051 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 04:57:55.556063 | orchestrator | Tuesday 31 March 2026 04:57:49 +0000 (0:00:00.266) 0:23:22.443 *********
2026-03-31 04:57:55.556081 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3
2026-03-31 04:57:55.556109 | orchestrator |
2026-03-31 04:57:55.556130 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:57:55.556150 | orchestrator | Tuesday 31 March 2026 04:57:50 +0000 (0:00:00.799) 0:23:23.242 *********
2026-03-31 04:57:55.556168 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556186 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.556205 | orchestrator |
2026-03-31 04:57:55.556223 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:57:55.556241 | orchestrator | Tuesday 31 March 2026 04:57:50 +0000 (0:00:00.240) 0:23:23.483 *********
2026-03-31 04:57:55.556261 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556279 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.556298 | orchestrator |
2026-03-31 04:57:55.556313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:57:55.556324 | orchestrator | Tuesday 31 March 2026 04:57:51 +0000 (0:00:00.255) 0:23:23.738 *********
2026-03-31 04:57:55.556335 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556346 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:57:55.556357 | orchestrator |
2026-03-31 04:57:55.556368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:57:55.556392 | orchestrator | Tuesday 31 March 2026 04:57:51 +0000 (0:00:00.237) 0:23:23.976 *********
2026-03-31 04:57:55.556404 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:57:55.556415 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:57:55.556426 | orchestrator |
2026-03-31 04:57:55.556437 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:57:55.556449 | orchestrator | Tuesday 31 March 2026 04:57:51 +0000 (0:00:00.341) 0:23:24.318 *********
2026-03-31 04:57:55.556460 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:57:55.556471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:57:55.556482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:57:55.556493 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556504 | orchestrator |
2026-03-31 04:57:55.556515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:57:55.556526 | orchestrator | Tuesday 31 March 2026 04:57:52 +0000 (0:00:00.734) 0:23:25.052 *********
2026-03-31 04:57:55.556537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:57:55.556548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:57:55.556559 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:57:55.556570 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556581 | orchestrator |
2026-03-31 04:57:55.556592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:57:55.556603 | orchestrator | Tuesday 31 March 2026 04:57:53 +0000 (0:00:01.176) 0:23:26.229 *********
2026-03-31 04:57:55.556614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 04:57:55.556625 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:57:55.556644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-31 04:57:55.556656 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:57:55.556667 | orchestrator |
2026-03-31 04:57:55.556678 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:57:55.556713 | orchestrator | Tuesday 31 March 2026 04:57:53 +0000 (0:00:00.435) 0:23:26.664 *********
2026-03-31 04:57:55.556725 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:57:55.556736 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:57:55.556747 | orchestrator |
2026-03-31 04:57:55.556758 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:57:55.556769 | orchestrator | Tuesday 31 March 2026 04:57:54 +0000 (0:00:00.477) 0:23:26.925 *********
2026-03-31 04:57:55.556780 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-31 04:57:55.556791 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-31 04:57:55.556802 | orchestrator |
2026-03-31 04:57:55.556813 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:57:55.556824 | orchestrator | Tuesday 31 March 2026 04:57:54 +0000 (0:00:00.477) 0:23:27.403 *********
2026-03-31 04:57:55.556835 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:57:55.556847 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:57:55.556858 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:57:55.556869 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:57:55.556880 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:57:55.556891 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:57:55.556917 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:58:09.013324 | orchestrator |
2026-03-31 04:58:09.013409 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:58:09.013418 | orchestrator | Tuesday 31 March 2026 04:57:55 +0000 (0:00:00.820) 0:23:28.224 *********
2026-03-31 04:58:09.013440 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:58:09.013447 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:58:09.013452 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:58:09.013458 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 04:58:09.013464 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 04:58:09.013470 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:58:09.013476 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:58:09.013482 | orchestrator |
2026-03-31 04:58:09.013488 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-03-31 04:58:09.013493 | orchestrator | Tuesday 31 March 2026 04:57:57 +0000 (0:00:01.756) 0:23:29.980 *********
2026-03-31 04:58:09.013499 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013506 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013511 | orchestrator |
2026-03-31 04:58:09.013517 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 04:58:09.013522 | orchestrator | Tuesday 31 March 2026 04:57:57 +0000 (0:00:00.268) 0:23:30.249 *********
2026-03-31 04:58:09.013528 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3
2026-03-31 04:58:09.013533 | orchestrator |
2026-03-31 04:58:09.013539 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 04:58:09.013544 | orchestrator | Tuesday 31 March 2026 04:57:58 +0000 (0:00:00.772) 0:23:31.021 *********
2026-03-31 04:58:09.013550 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3
2026-03-31 04:58:09.013555 | orchestrator |
2026-03-31 04:58:09.013561 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 04:58:09.013566 | orchestrator | Tuesday 31 March 2026 04:57:58 +0000 (0:00:00.380) 0:23:31.402 *********
2026-03-31 04:58:09.013572 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013577 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013583 | orchestrator |
2026-03-31 04:58:09.013588 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 04:58:09.013594 | orchestrator | Tuesday 31 March 2026 04:57:58 +0000 (0:00:00.243) 0:23:31.645 *********
2026-03-31 04:58:09.013599 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:58:09.013616 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:58:09.013622 | orchestrator |
2026-03-31 04:58:09.013634 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 04:58:09.013639 | orchestrator | Tuesday 31 March 2026 04:57:59 +0000 (0:00:00.636) 0:23:32.282 *********
2026-03-31 04:58:09.013645 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:58:09.013651 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:58:09.013656 | orchestrator |
2026-03-31 04:58:09.013662 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 04:58:09.013667 | orchestrator | Tuesday 31 March 2026 04:58:00 +0000 (0:00:00.608) 0:23:32.890 *********
2026-03-31 04:58:09.013673 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:58:09.013678 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:58:09.013683 | orchestrator |
2026-03-31 04:58:09.013689 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 04:58:09.013694 | orchestrator | Tuesday 31 March 2026 04:58:01 +0000 (0:00:01.010) 0:23:33.901 *********
2026-03-31 04:58:09.013742 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013748 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013753 | orchestrator |
2026-03-31 04:58:09.013759 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 04:58:09.013776 | orchestrator | Tuesday 31 March 2026 04:58:01 +0000 (0:00:00.240) 0:23:34.141 *********
2026-03-31 04:58:09.013790 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013795 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013801 | orchestrator |
2026-03-31 04:58:09.013806 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 04:58:09.013812 | orchestrator | Tuesday 31 March 2026 04:58:01 +0000 (0:00:00.233) 0:23:34.375 *********
2026-03-31 04:58:09.013817 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013823 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013828 | orchestrator |
2026-03-31 04:58:09.013834 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 04:58:09.013839 | orchestrator | Tuesday 31 March 2026 04:58:01 +0000 (0:00:00.229) 0:23:34.604 *********
2026-03-31 04:58:09.013844 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:58:09.013850 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:58:09.013855 | orchestrator |
2026-03-31 04:58:09.013861 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 04:58:09.013866 | orchestrator | Tuesday 31 March 2026 04:58:02 +0000 (0:00:00.660) 0:23:35.264 *********
2026-03-31 04:58:09.013872 | orchestrator | ok: [testbed-node-4]
2026-03-31 04:58:09.013877 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:58:09.013883 | orchestrator |
2026-03-31 04:58:09.013888 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 04:58:09.013894 | orchestrator | Tuesday 31 March 2026 04:58:03 +0000 (0:00:00.641) 0:23:35.906 *********
2026-03-31 04:58:09.013900 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013907 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013913 | orchestrator |
2026-03-31 04:58:09.013920 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 04:58:09.013927 | orchestrator | Tuesday 31 March 2026 04:58:03 +0000 (0:00:00.207) 0:23:36.113 *********
2026-03-31 04:58:09.013933 | orchestrator | skipping: [testbed-node-4]
2026-03-31 04:58:09.013950 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:58:09.013958 | orchestrator |
2026-03-31 04:58:09.013964 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 04:58:09.013971 | orchestrator | Tuesday 31
March 2026 04:58:03 +0000 (0:00:00.560) 0:23:36.674 ********* 2026-03-31 04:58:09.013977 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:09.013984 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:09.013990 | orchestrator | 2026-03-31 04:58:09.013997 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 04:58:09.014004 | orchestrator | Tuesday 31 March 2026 04:58:04 +0000 (0:00:00.269) 0:23:36.944 ********* 2026-03-31 04:58:09.014010 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:09.014054 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:09.014083 | orchestrator | 2026-03-31 04:58:09.014091 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 04:58:09.014097 | orchestrator | Tuesday 31 March 2026 04:58:04 +0000 (0:00:00.256) 0:23:37.200 ********* 2026-03-31 04:58:09.014104 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:09.014111 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:09.014118 | orchestrator | 2026-03-31 04:58:09.014125 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:58:09.014132 | orchestrator | Tuesday 31 March 2026 04:58:04 +0000 (0:00:00.268) 0:23:37.468 ********* 2026-03-31 04:58:09.014140 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014146 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014153 | orchestrator | 2026-03-31 04:58:09.014160 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:58:09.014167 | orchestrator | Tuesday 31 March 2026 04:58:05 +0000 (0:00:00.244) 0:23:37.713 ********* 2026-03-31 04:58:09.014174 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014181 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014188 | orchestrator | 2026-03-31 04:58:09.014195 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-03-31 04:58:09.014208 | orchestrator | Tuesday 31 March 2026 04:58:05 +0000 (0:00:00.243) 0:23:37.957 ********* 2026-03-31 04:58:09.014215 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014221 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014228 | orchestrator | 2026-03-31 04:58:09.014235 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:58:09.014243 | orchestrator | Tuesday 31 March 2026 04:58:05 +0000 (0:00:00.522) 0:23:38.480 ********* 2026-03-31 04:58:09.014250 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:09.014257 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:09.014263 | orchestrator | 2026-03-31 04:58:09.014268 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:58:09.014274 | orchestrator | Tuesday 31 March 2026 04:58:06 +0000 (0:00:00.259) 0:23:38.739 ********* 2026-03-31 04:58:09.014280 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:09.014286 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:09.014292 | orchestrator | 2026-03-31 04:58:09.014298 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:58:09.014303 | orchestrator | Tuesday 31 March 2026 04:58:06 +0000 (0:00:00.385) 0:23:39.125 ********* 2026-03-31 04:58:09.014309 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014315 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014321 | orchestrator | 2026-03-31 04:58:09.014327 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:58:09.014332 | orchestrator | Tuesday 31 March 2026 04:58:06 +0000 (0:00:00.284) 0:23:39.410 ********* 2026-03-31 04:58:09.014338 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014344 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 04:58:09.014350 | orchestrator | 2026-03-31 04:58:09.014356 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:58:09.014361 | orchestrator | Tuesday 31 March 2026 04:58:06 +0000 (0:00:00.233) 0:23:39.644 ********* 2026-03-31 04:58:09.014367 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014373 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014379 | orchestrator | 2026-03-31 04:58:09.014384 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:58:09.014390 | orchestrator | Tuesday 31 March 2026 04:58:07 +0000 (0:00:00.244) 0:23:39.888 ********* 2026-03-31 04:58:09.014396 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014405 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014411 | orchestrator | 2026-03-31 04:58:09.014417 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:58:09.014423 | orchestrator | Tuesday 31 March 2026 04:58:07 +0000 (0:00:00.628) 0:23:40.516 ********* 2026-03-31 04:58:09.014429 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014434 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014440 | orchestrator | 2026-03-31 04:58:09.014446 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:58:09.014452 | orchestrator | Tuesday 31 March 2026 04:58:08 +0000 (0:00:00.239) 0:23:40.756 ********* 2026-03-31 04:58:09.014457 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014465 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014474 | orchestrator | 2026-03-31 04:58:09.014483 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:58:09.014492 | orchestrator | Tuesday 31 March 2026 04:58:08 +0000 (0:00:00.237) 0:23:40.994 ********* 
2026-03-31 04:58:09.014501 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014509 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014518 | orchestrator | 2026-03-31 04:58:09.014527 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:58:09.014535 | orchestrator | Tuesday 31 March 2026 04:58:08 +0000 (0:00:00.232) 0:23:41.226 ********* 2026-03-31 04:58:09.014543 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014551 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014567 | orchestrator | 2026-03-31 04:58:09.014575 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:58:09.014585 | orchestrator | Tuesday 31 March 2026 04:58:08 +0000 (0:00:00.226) 0:23:41.453 ********* 2026-03-31 04:58:09.014594 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:09.014602 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:09.014611 | orchestrator | 2026-03-31 04:58:09.014627 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:58:24.049612 | orchestrator | Tuesday 31 March 2026 04:58:08 +0000 (0:00:00.219) 0:23:41.673 ********* 2026-03-31 04:58:24.049839 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.049875 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.049895 | orchestrator | 2026-03-31 04:58:24.049908 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:58:24.049921 | orchestrator | Tuesday 31 March 2026 04:58:09 +0000 (0:00:00.231) 0:23:41.904 ********* 2026-03-31 04:58:24.049932 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.049943 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.049954 | orchestrator | 2026-03-31 04:58:24.049965 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-31 04:58:24.049976 | orchestrator | Tuesday 31 March 2026 04:58:09 +0000 (0:00:00.642) 0:23:42.546 ********* 2026-03-31 04:58:24.049988 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.049999 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050010 | orchestrator | 2026-03-31 04:58:24.050088 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:58:24.050100 | orchestrator | Tuesday 31 March 2026 04:58:10 +0000 (0:00:00.414) 0:23:42.961 ********* 2026-03-31 04:58:24.050112 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.050124 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.050149 | orchestrator | 2026-03-31 04:58:24.050162 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:58:24.050175 | orchestrator | Tuesday 31 March 2026 04:58:11 +0000 (0:00:01.074) 0:23:44.035 ********* 2026-03-31 04:58:24.050189 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.050202 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.050213 | orchestrator | 2026-03-31 04:58:24.050225 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:58:24.050236 | orchestrator | Tuesday 31 March 2026 04:58:12 +0000 (0:00:01.372) 0:23:45.408 ********* 2026-03-31 04:58:24.050248 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3 2026-03-31 04:58:24.050259 | orchestrator | 2026-03-31 04:58:24.050271 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:58:24.050282 | orchestrator | Tuesday 31 March 2026 04:58:13 +0000 (0:00:00.644) 0:23:46.052 ********* 2026-03-31 04:58:24.050293 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050304 | orchestrator | skipping: [testbed-node-3] 
2026-03-31 04:58:24.050315 | orchestrator | 2026-03-31 04:58:24.050327 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:58:24.050338 | orchestrator | Tuesday 31 March 2026 04:58:13 +0000 (0:00:00.241) 0:23:46.293 ********* 2026-03-31 04:58:24.050349 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050360 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050371 | orchestrator | 2026-03-31 04:58:24.050383 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:58:24.050394 | orchestrator | Tuesday 31 March 2026 04:58:13 +0000 (0:00:00.243) 0:23:46.537 ********* 2026-03-31 04:58:24.050405 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:58:24.050416 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:58:24.050427 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:58:24.050438 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:58:24.050475 | orchestrator | 2026-03-31 04:58:24.050487 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:58:24.050498 | orchestrator | Tuesday 31 March 2026 04:58:14 +0000 (0:00:00.915) 0:23:47.452 ********* 2026-03-31 04:58:24.050509 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.050520 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.050531 | orchestrator | 2026-03-31 04:58:24.050542 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:58:24.050554 | orchestrator | Tuesday 31 March 2026 04:58:15 +0000 (0:00:00.578) 0:23:48.031 ********* 2026-03-31 04:58:24.050579 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050590 | 
orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050602 | orchestrator | 2026-03-31 04:58:24.050613 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:58:24.050624 | orchestrator | Tuesday 31 March 2026 04:58:15 +0000 (0:00:00.259) 0:23:48.290 ********* 2026-03-31 04:58:24.050635 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050646 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050657 | orchestrator | 2026-03-31 04:58:24.050668 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:58:24.050679 | orchestrator | Tuesday 31 March 2026 04:58:15 +0000 (0:00:00.241) 0:23:48.532 ********* 2026-03-31 04:58:24.050690 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050701 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050712 | orchestrator | 2026-03-31 04:58:24.050750 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:58:24.050761 | orchestrator | Tuesday 31 March 2026 04:58:16 +0000 (0:00:00.533) 0:23:49.066 ********* 2026-03-31 04:58:24.050772 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-3 2026-03-31 04:58:24.050783 | orchestrator | 2026-03-31 04:58:24.050794 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:58:24.050805 | orchestrator | Tuesday 31 March 2026 04:58:16 +0000 (0:00:00.404) 0:23:49.470 ********* 2026-03-31 04:58:24.050816 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.050827 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.050838 | orchestrator | 2026-03-31 04:58:24.050849 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:58:24.050860 | orchestrator | Tuesday 31 March 2026 
04:58:17 +0000 (0:00:00.796) 0:23:50.266 ********* 2026-03-31 04:58:24.050872 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:58:24.050904 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:58:24.050916 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:58:24.050927 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.050938 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:58:24.050950 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:58:24.050961 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:58:24.050972 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.050983 | orchestrator | 2026-03-31 04:58:24.050995 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:58:24.051006 | orchestrator | Tuesday 31 March 2026 04:58:17 +0000 (0:00:00.250) 0:23:50.517 ********* 2026-03-31 04:58:24.051017 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051028 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051039 | orchestrator | 2026-03-31 04:58:24.051050 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-31 04:58:24.051061 | orchestrator | Tuesday 31 March 2026 04:58:18 +0000 (0:00:00.231) 0:23:50.749 ********* 2026-03-31 04:58:24.051081 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051093 | orchestrator | 2026-03-31 04:58:24.051104 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:58:24.051115 | orchestrator | Tuesday 31 March 2026 04:58:18 +0000 (0:00:00.171) 0:23:50.921 ********* 2026-03-31 04:58:24.051126 | orchestrator 
| skipping: [testbed-node-4] 2026-03-31 04:58:24.051137 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051148 | orchestrator | 2026-03-31 04:58:24.051159 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:58:24.051171 | orchestrator | Tuesday 31 March 2026 04:58:18 +0000 (0:00:00.553) 0:23:51.474 ********* 2026-03-31 04:58:24.051182 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051193 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051204 | orchestrator | 2026-03-31 04:58:24.051215 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:58:24.051226 | orchestrator | Tuesday 31 March 2026 04:58:19 +0000 (0:00:00.252) 0:23:51.727 ********* 2026-03-31 04:58:24.051237 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051248 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051259 | orchestrator | 2026-03-31 04:58:24.051270 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:58:24.051281 | orchestrator | Tuesday 31 March 2026 04:58:19 +0000 (0:00:00.273) 0:23:52.000 ********* 2026-03-31 04:58:24.051292 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.051303 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.051314 | orchestrator | 2026-03-31 04:58:24.051325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:58:24.051336 | orchestrator | Tuesday 31 March 2026 04:58:20 +0000 (0:00:01.557) 0:23:53.557 ********* 2026-03-31 04:58:24.051347 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:58:24.051358 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:24.051369 | orchestrator | 2026-03-31 04:58:24.051380 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:58:24.051391 | 
orchestrator | Tuesday 31 March 2026 04:58:21 +0000 (0:00:00.282) 0:23:53.839 ********* 2026-03-31 04:58:24.051402 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3 2026-03-31 04:58:24.051420 | orchestrator | 2026-03-31 04:58:24.051439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:58:24.051459 | orchestrator | Tuesday 31 March 2026 04:58:21 +0000 (0:00:00.748) 0:23:54.588 ********* 2026-03-31 04:58:24.051478 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051496 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051515 | orchestrator | 2026-03-31 04:58:24.051531 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:58:24.051549 | orchestrator | Tuesday 31 March 2026 04:58:22 +0000 (0:00:00.242) 0:23:54.831 ********* 2026-03-31 04:58:24.051578 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051599 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051618 | orchestrator | 2026-03-31 04:58:24.051638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:58:24.051654 | orchestrator | Tuesday 31 March 2026 04:58:22 +0000 (0:00:00.293) 0:23:55.125 ********* 2026-03-31 04:58:24.051665 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051676 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051687 | orchestrator | 2026-03-31 04:58:24.051699 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:58:24.051741 | orchestrator | Tuesday 31 March 2026 04:58:22 +0000 (0:00:00.251) 0:23:55.377 ********* 2026-03-31 04:58:24.051761 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051780 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051796 | orchestrator | 2026-03-31 
04:58:24.051816 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-31 04:58:24.051834 | orchestrator | Tuesday 31 March 2026 04:58:22 +0000 (0:00:00.280) 0:23:55.657 ********* 2026-03-31 04:58:24.051867 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051885 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051905 | orchestrator | 2026-03-31 04:58:24.051919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:58:24.051930 | orchestrator | Tuesday 31 March 2026 04:58:23 +0000 (0:00:00.259) 0:23:55.917 ********* 2026-03-31 04:58:24.051942 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.051953 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.051963 | orchestrator | 2026-03-31 04:58:24.051975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:58:24.051986 | orchestrator | Tuesday 31 March 2026 04:58:23 +0000 (0:00:00.239) 0:23:56.156 ********* 2026-03-31 04:58:24.051997 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:24.052008 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:24.052019 | orchestrator | 2026-03-31 04:58:24.052041 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:58:44.216122 | orchestrator | Tuesday 31 March 2026 04:58:24 +0000 (0:00:00.556) 0:23:56.713 ********* 2026-03-31 04:58:44.216238 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.216275 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.216289 | orchestrator | 2026-03-31 04:58:44.216314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:58:44.216326 | orchestrator | Tuesday 31 March 2026 04:58:24 +0000 (0:00:00.247) 0:23:56.961 ********* 2026-03-31 04:58:44.216338 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 04:58:44.216350 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:58:44.216361 | orchestrator | 2026-03-31 04:58:44.216373 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:58:44.216384 | orchestrator | Tuesday 31 March 2026 04:58:24 +0000 (0:00:00.400) 0:23:57.361 ********* 2026-03-31 04:58:44.216396 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3 2026-03-31 04:58:44.216407 | orchestrator | 2026-03-31 04:58:44.216418 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:58:44.216429 | orchestrator | Tuesday 31 March 2026 04:58:25 +0000 (0:00:00.368) 0:23:57.730 ********* 2026-03-31 04:58:44.216440 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-31 04:58:44.216452 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-31 04:58:44.216463 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-31 04:58:44.216474 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-31 04:58:44.216485 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-31 04:58:44.216495 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-31 04:58:44.216506 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-31 04:58:44.216517 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-31 04:58:44.216528 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-31 04:58:44.216538 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-31 04:58:44.216549 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-31 04:58:44.216560 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-31 04:58:44.216571 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 
2026-03-31 04:58:44.216582 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-31 04:58:44.216592 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:58:44.216604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:58:44.216615 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:58:44.216626 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:58:44.216637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:58:44.216648 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:58:44.216683 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:58:44.216694 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:58:44.216705 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:58:44.216716 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:58:44.216726 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:58:44.216763 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:58:44.216775 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:58:44.216786 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:58:44.216796 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-31 04:58:44.216807 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-31 04:58:44.216833 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-31 04:58:44.216844 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-31 04:58:44.216855 | orchestrator | 2026-03-31 04:58:44.216866 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 04:58:44.216877 | orchestrator | Tuesday 31 March 2026 04:58:30 +0000 (0:00:05.620) 0:24:03.351 ********* 2026-03-31 04:58:44.216888 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3 2026-03-31 04:58:44.216899 | orchestrator | 2026-03-31 04:58:44.216910 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-31 04:58:44.216921 | orchestrator | Tuesday 31 March 2026 04:58:31 +0000 (0:00:00.671) 0:24:04.022 ********* 2026-03-31 04:58:44.216932 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.216944 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.216955 | orchestrator | 2026-03-31 04:58:44.216966 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-31 04:58:44.216977 | orchestrator | Tuesday 31 March 2026 04:58:31 +0000 (0:00:00.599) 0:24:04.622 ********* 2026-03-31 04:58:44.216988 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.216999 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.217010 | orchestrator | 2026-03-31 04:58:44.217021 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 04:58:44.217051 | orchestrator | Tuesday 31 March 2026 04:58:32 +0000 (0:00:01.054) 0:24:05.676 ********* 2026-03-31 04:58:44.217063 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217074 | orchestrator | 
skipping: [testbed-node-3] 2026-03-31 04:58:44.217085 | orchestrator | 2026-03-31 04:58:44.217096 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 04:58:44.217107 | orchestrator | Tuesday 31 March 2026 04:58:33 +0000 (0:00:00.225) 0:24:05.901 ********* 2026-03-31 04:58:44.217117 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217128 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217139 | orchestrator | 2026-03-31 04:58:44.217150 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 04:58:44.217161 | orchestrator | Tuesday 31 March 2026 04:58:33 +0000 (0:00:00.245) 0:24:06.146 ********* 2026-03-31 04:58:44.217172 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217183 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217193 | orchestrator | 2026-03-31 04:58:44.217204 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 04:58:44.217215 | orchestrator | Tuesday 31 March 2026 04:58:33 +0000 (0:00:00.508) 0:24:06.654 ********* 2026-03-31 04:58:44.217235 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217246 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217257 | orchestrator | 2026-03-31 04:58:44.217267 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 04:58:44.217278 | orchestrator | Tuesday 31 March 2026 04:58:34 +0000 (0:00:00.232) 0:24:06.887 ********* 2026-03-31 04:58:44.217289 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217300 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217311 | orchestrator | 2026-03-31 04:58:44.217322 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 04:58:44.217333 | orchestrator | Tuesday 31 March 2026 
04:58:34 +0000 (0:00:00.240) 0:24:07.128 ********* 2026-03-31 04:58:44.217344 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217355 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217366 | orchestrator | 2026-03-31 04:58:44.217377 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 04:58:44.217388 | orchestrator | Tuesday 31 March 2026 04:58:34 +0000 (0:00:00.247) 0:24:07.375 ********* 2026-03-31 04:58:44.217399 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217410 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217421 | orchestrator | 2026-03-31 04:58:44.217432 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-31 04:58:44.217443 | orchestrator | Tuesday 31 March 2026 04:58:34 +0000 (0:00:00.244) 0:24:07.619 ********* 2026-03-31 04:58:44.217454 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217465 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217476 | orchestrator | 2026-03-31 04:58:44.217487 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 04:58:44.217498 | orchestrator | Tuesday 31 March 2026 04:58:35 +0000 (0:00:00.239) 0:24:07.859 ********* 2026-03-31 04:58:44.217509 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217519 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217530 | orchestrator | 2026-03-31 04:58:44.217541 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 04:58:44.217552 | orchestrator | Tuesday 31 March 2026 04:58:35 +0000 (0:00:00.276) 0:24:08.136 ********* 2026-03-31 04:58:44.217563 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217574 | orchestrator | skipping: [testbed-node-3] 2026-03-31 
04:58:44.217585 | orchestrator | 2026-03-31 04:58:44.217596 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 04:58:44.217607 | orchestrator | Tuesday 31 March 2026 04:58:35 +0000 (0:00:00.540) 0:24:08.676 ********* 2026-03-31 04:58:44.217618 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:58:44.217629 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:58:44.217640 | orchestrator | 2026-03-31 04:58:44.217651 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 04:58:44.217667 | orchestrator | Tuesday 31 March 2026 04:58:36 +0000 (0:00:00.279) 0:24:08.956 ********* 2026-03-31 04:58:44.217678 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:58:44.217689 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-31 04:58:44.217700 | orchestrator | 2026-03-31 04:58:44.217711 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 04:58:44.217722 | orchestrator | Tuesday 31 March 2026 04:58:39 +0000 (0:00:03.678) 0:24:12.635 ********* 2026-03-31 04:58:44.217733 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.217759 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:58:44.217771 | orchestrator | 2026-03-31 04:58:44.217782 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 04:58:44.217800 | orchestrator | Tuesday 31 March 2026 04:58:40 +0000 (0:00:00.301) 0:24:12.936 ********* 2026-03-31 04:58:44.217813 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-31 04:58:44.217834 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-31 04:59:05.808033 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-31 04:59:05.808134 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-31 04:59:05.808146 | orchestrator | 2026-03-31 04:59:05.808155 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 04:59:05.808163 | orchestrator | Tuesday 31 March 2026 04:58:44 +0000 (0:00:03.946) 0:24:16.883 ********* 2026-03-31 04:59:05.808170 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808177 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808184 | orchestrator | 2026-03-31 04:59:05.808191 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 04:59:05.808198 | orchestrator | Tuesday 31 March 2026 04:58:44 +0000 
(0:00:00.254) 0:24:17.137 ********* 2026-03-31 04:59:05.808205 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808212 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808219 | orchestrator | 2026-03-31 04:59:05.808226 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 04:59:05.808234 | orchestrator | Tuesday 31 March 2026 04:58:44 +0000 (0:00:00.242) 0:24:17.379 ********* 2026-03-31 04:59:05.808241 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808248 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808255 | orchestrator | 2026-03-31 04:59:05.808261 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 04:59:05.808268 | orchestrator | Tuesday 31 March 2026 04:58:45 +0000 (0:00:00.579) 0:24:17.959 ********* 2026-03-31 04:59:05.808275 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808282 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808288 | orchestrator | 2026-03-31 04:59:05.808295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 04:59:05.808302 | orchestrator | Tuesday 31 March 2026 04:58:45 +0000 (0:00:00.291) 0:24:18.250 ********* 2026-03-31 04:59:05.808309 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808316 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808327 | orchestrator | 2026-03-31 04:59:05.808339 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 04:59:05.808347 | orchestrator | Tuesday 31 March 2026 04:58:45 +0000 (0:00:00.258) 0:24:18.509 ********* 2026-03-31 04:59:05.808354 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.808362 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.808368 | orchestrator | 2026-03-31 
04:59:05.808375 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 04:59:05.808400 | orchestrator | Tuesday 31 March 2026 04:58:46 +0000 (0:00:00.373) 0:24:18.882 ********* 2026-03-31 04:59:05.808407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:59:05.808414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:59:05.808421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:59:05.808428 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808434 | orchestrator | 2026-03-31 04:59:05.808441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 04:59:05.808459 | orchestrator | Tuesday 31 March 2026 04:58:46 +0000 (0:00:00.430) 0:24:19.312 ********* 2026-03-31 04:59:05.808467 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:59:05.808474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:59:05.808481 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:59:05.808487 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808494 | orchestrator | 2026-03-31 04:59:05.808501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 04:59:05.808507 | orchestrator | Tuesday 31 March 2026 04:58:47 +0000 (0:00:00.412) 0:24:19.725 ********* 2026-03-31 04:59:05.808514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 04:59:05.808521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 04:59:05.808528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 04:59:05.808534 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808541 | orchestrator | 2026-03-31 04:59:05.808548 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-31 04:59:05.808554 | orchestrator | Tuesday 31 March 2026 04:58:47 +0000 (0:00:00.390) 0:24:20.116 ********* 2026-03-31 04:59:05.808561 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.808568 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.808575 | orchestrator | 2026-03-31 04:59:05.808581 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 04:59:05.808588 | orchestrator | Tuesday 31 March 2026 04:58:47 +0000 (0:00:00.254) 0:24:20.371 ********* 2026-03-31 04:59:05.808596 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 04:59:05.808604 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-31 04:59:05.808619 | orchestrator | 2026-03-31 04:59:05.808627 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 04:59:05.808635 | orchestrator | Tuesday 31 March 2026 04:58:48 +0000 (0:00:00.955) 0:24:21.326 ********* 2026-03-31 04:59:05.808643 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.808651 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.808659 | orchestrator | 2026-03-31 04:59:05.808678 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-31 04:59:05.808686 | orchestrator | Tuesday 31 March 2026 04:58:49 +0000 (0:00:00.994) 0:24:22.320 ********* 2026-03-31 04:59:05.808694 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.808702 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.808710 | orchestrator | 2026-03-31 04:59:05.808718 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-31 04:59:05.808725 | orchestrator | Tuesday 31 March 2026 04:58:49 +0000 (0:00:00.223) 0:24:22.544 ********* 2026-03-31 04:59:05.808733 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-3 2026-03-31 04:59:05.808741 | orchestrator | 2026-03-31 04:59:05.808749 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-31 04:59:05.808771 | orchestrator | Tuesday 31 March 2026 04:58:50 +0000 (0:00:00.675) 0:24:23.219 ********* 2026-03-31 04:59:05.808780 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-31 04:59:05.808788 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-31 04:59:05.808796 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-31 04:59:05.808810 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-31 04:59:05.808818 | orchestrator | 2026-03-31 04:59:05.808826 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-31 04:59:05.808834 | orchestrator | Tuesday 31 March 2026 04:58:51 +0000 (0:00:00.950) 0:24:24.169 ********* 2026-03-31 04:59:05.808841 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 04:59:05.808849 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 04:59:05.808857 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 04:59:05.808865 | orchestrator | 2026-03-31 04:59:05.808873 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-31 04:59:05.808880 | orchestrator | Tuesday 31 March 2026 04:58:53 +0000 (0:00:02.084) 0:24:26.254 ********* 2026-03-31 04:59:05.808888 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-31 04:59:05.808896 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 04:59:05.808904 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.808912 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-31 04:59:05.808920 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-03-31 04:59:05.808928 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.808936 | orchestrator | 2026-03-31 04:59:05.808944 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-31 04:59:05.808952 | orchestrator | Tuesday 31 March 2026 04:58:54 +0000 (0:00:01.015) 0:24:27.270 ********* 2026-03-31 04:59:05.808959 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.808966 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.808973 | orchestrator | 2026-03-31 04:59:05.808980 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-31 04:59:05.808987 | orchestrator | Tuesday 31 March 2026 04:58:55 +0000 (0:00:00.631) 0:24:27.901 ********* 2026-03-31 04:59:05.808993 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.809000 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:05.809006 | orchestrator | 2026-03-31 04:59:05.809013 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-31 04:59:05.809020 | orchestrator | Tuesday 31 March 2026 04:58:55 +0000 (0:00:00.218) 0:24:28.120 ********* 2026-03-31 04:59:05.809027 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-3 2026-03-31 04:59:05.809034 | orchestrator | 2026-03-31 04:59:05.809040 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-31 04:59:05.809047 | orchestrator | Tuesday 31 March 2026 04:58:56 +0000 (0:00:00.668) 0:24:28.789 ********* 2026-03-31 04:59:05.809058 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3 2026-03-31 04:59:05.809065 | orchestrator | 2026-03-31 04:59:05.809071 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-31 04:59:05.809078 | orchestrator | Tuesday 31 March 2026 
04:58:56 +0000 (0:00:00.387) 0:24:29.177 ********* 2026-03-31 04:59:05.809085 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.809091 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.809098 | orchestrator | 2026-03-31 04:59:05.809105 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-31 04:59:05.809112 | orchestrator | Tuesday 31 March 2026 04:58:57 +0000 (0:00:01.142) 0:24:30.319 ********* 2026-03-31 04:59:05.809118 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.809125 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.809131 | orchestrator | 2026-03-31 04:59:05.809138 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-31 04:59:05.809145 | orchestrator | Tuesday 31 March 2026 04:58:58 +0000 (0:00:01.017) 0:24:31.336 ********* 2026-03-31 04:59:05.809151 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.809158 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.809165 | orchestrator | 2026-03-31 04:59:05.809171 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-31 04:59:05.809182 | orchestrator | Tuesday 31 March 2026 04:58:59 +0000 (0:00:01.309) 0:24:32.646 ********* 2026-03-31 04:59:05.809189 | orchestrator | changed: [testbed-node-4] 2026-03-31 04:59:05.809196 | orchestrator | changed: [testbed-node-3] 2026-03-31 04:59:05.809203 | orchestrator | 2026-03-31 04:59:05.809209 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-31 04:59:05.809216 | orchestrator | Tuesday 31 March 2026 04:59:02 +0000 (0:00:02.490) 0:24:35.136 ********* 2026-03-31 04:59:05.809223 | orchestrator | ok: [testbed-node-4] 2026-03-31 04:59:05.809229 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:05.809236 | orchestrator | 2026-03-31 04:59:05.809243 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-31 04:59:05.809249 | orchestrator | Tuesday 31 March 2026 04:59:03 +0000 (0:00:01.115) 0:24:36.252 ********* 2026-03-31 04:59:05.809256 | orchestrator | skipping: [testbed-node-4] 2026-03-31 04:59:05.809267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:59:12.326372 | orchestrator | 2026-03-31 04:59:12.326480 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-31 04:59:12.326498 | orchestrator | 2026-03-31 04:59:12.326511 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 04:59:12.326523 | orchestrator | Tuesday 31 March 2026 04:59:05 +0000 (0:00:02.219) 0:24:38.472 ********* 2026-03-31 04:59:12.326534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-31 04:59:12.326545 | orchestrator | 2026-03-31 04:59:12.326556 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 04:59:12.326567 | orchestrator | Tuesday 31 March 2026 04:59:06 +0000 (0:00:00.253) 0:24:38.725 ********* 2026-03-31 04:59:12.326579 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326591 | orchestrator | 2026-03-31 04:59:12.326602 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 04:59:12.326613 | orchestrator | Tuesday 31 March 2026 04:59:06 +0000 (0:00:00.465) 0:24:39.190 ********* 2026-03-31 04:59:12.326624 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326635 | orchestrator | 2026-03-31 04:59:12.326646 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 04:59:12.326658 | orchestrator | Tuesday 31 March 2026 04:59:06 +0000 (0:00:00.131) 0:24:39.322 ********* 2026-03-31 04:59:12.326668 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326680 | 
orchestrator | 2026-03-31 04:59:12.326691 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 04:59:12.326702 | orchestrator | Tuesday 31 March 2026 04:59:07 +0000 (0:00:00.449) 0:24:39.771 ********* 2026-03-31 04:59:12.326713 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326724 | orchestrator | 2026-03-31 04:59:12.326735 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 04:59:12.326746 | orchestrator | Tuesday 31 March 2026 04:59:07 +0000 (0:00:00.153) 0:24:39.925 ********* 2026-03-31 04:59:12.326757 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326800 | orchestrator | 2026-03-31 04:59:12.326813 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 04:59:12.326824 | orchestrator | Tuesday 31 March 2026 04:59:07 +0000 (0:00:00.129) 0:24:40.054 ********* 2026-03-31 04:59:12.326835 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326846 | orchestrator | 2026-03-31 04:59:12.326857 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 04:59:12.326869 | orchestrator | Tuesday 31 March 2026 04:59:07 +0000 (0:00:00.438) 0:24:40.492 ********* 2026-03-31 04:59:12.326881 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:12.326892 | orchestrator | 2026-03-31 04:59:12.326904 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 04:59:12.326917 | orchestrator | Tuesday 31 March 2026 04:59:07 +0000 (0:00:00.183) 0:24:40.676 ********* 2026-03-31 04:59:12.326929 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.326968 | orchestrator | 2026-03-31 04:59:12.326982 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 04:59:12.326994 | orchestrator | Tuesday 31 March 2026 04:59:08 +0000 (0:00:00.147) 
0:24:40.823 ********* 2026-03-31 04:59:12.327006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:59:12.327019 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:59:12.327032 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:59:12.327044 | orchestrator | 2026-03-31 04:59:12.327056 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-31 04:59:12.327069 | orchestrator | Tuesday 31 March 2026 04:59:08 +0000 (0:00:00.671) 0:24:41.494 ********* 2026-03-31 04:59:12.327082 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:12.327094 | orchestrator | 2026-03-31 04:59:12.327107 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 04:59:12.327134 | orchestrator | Tuesday 31 March 2026 04:59:09 +0000 (0:00:00.263) 0:24:41.758 ********* 2026-03-31 04:59:12.327147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 04:59:12.327160 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 04:59:12.327172 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 04:59:12.327184 | orchestrator | 2026-03-31 04:59:12.327196 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 04:59:12.327209 | orchestrator | Tuesday 31 March 2026 04:59:10 +0000 (0:00:01.826) 0:24:43.584 ********* 2026-03-31 04:59:12.327221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-31 04:59:12.327234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-31 04:59:12.327246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-31 
04:59:12.327259 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:12.327270 | orchestrator | 2026-03-31 04:59:12.327281 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 04:59:12.327292 | orchestrator | Tuesday 31 March 2026 04:59:11 +0000 (0:00:00.426) 0:24:44.010 ********* 2026-03-31 04:59:12.327304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 04:59:12.327319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 04:59:12.327348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 04:59:12.327360 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:12.327371 | orchestrator | 2026-03-31 04:59:12.327382 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 04:59:12.327393 | orchestrator | Tuesday 31 March 2026 04:59:11 +0000 (0:00:00.629) 0:24:44.640 ********* 2026-03-31 04:59:12.327407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 
04:59:12.327420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:59:12.327440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 04:59:12.327452 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:12.327463 | orchestrator | 2026-03-31 04:59:12.327474 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 04:59:12.327484 | orchestrator | Tuesday 31 March 2026 04:59:12 +0000 (0:00:00.167) 0:24:44.807 ********* 2026-03-31 04:59:12.327498 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 04:59:09.596285', 'end': '2026-03-31 04:59:09.641413', 'delta': '0:00:00.045128', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 04:59:12.327518 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 04:59:10.143211', 'end': '2026-03-31 04:59:10.184856', 'delta': '0:00:00.041645', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 04:59:12.327531 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 04:59:10.698293', 'end': '2026-03-31 04:59:10.750968', 'delta': '0:00:00.052675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 04:59:12.327543 | orchestrator | 2026-03-31 04:59:12.327560 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 04:59:16.392932 | orchestrator | Tuesday 31 March 2026 04:59:12 +0000 (0:00:00.190) 0:24:44.998 ********* 2026-03-31 04:59:16.393050 | orchestrator | ok: [testbed-node-3] 2026-03-31 
04:59:16.393067 | orchestrator | 2026-03-31 04:59:16.393080 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 04:59:16.393090 | orchestrator | Tuesday 31 March 2026 04:59:12 +0000 (0:00:00.269) 0:24:45.267 ********* 2026-03-31 04:59:16.393142 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393155 | orchestrator | 2026-03-31 04:59:16.393190 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-31 04:59:16.393201 | orchestrator | Tuesday 31 March 2026 04:59:12 +0000 (0:00:00.241) 0:24:45.509 ********* 2026-03-31 04:59:16.393211 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:16.393221 | orchestrator | 2026-03-31 04:59:16.393231 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 04:59:16.393241 | orchestrator | Tuesday 31 March 2026 04:59:12 +0000 (0:00:00.146) 0:24:45.656 ********* 2026-03-31 04:59:16.393251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-31 04:59:16.393261 | orchestrator | 2026-03-31 04:59:16.393271 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:59:16.393312 | orchestrator | Tuesday 31 March 2026 04:59:14 +0000 (0:00:01.623) 0:24:47.279 ********* 2026-03-31 04:59:16.393323 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:16.393333 | orchestrator | 2026-03-31 04:59:16.393343 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 04:59:16.393353 | orchestrator | Tuesday 31 March 2026 04:59:14 +0000 (0:00:00.165) 0:24:47.445 ********* 2026-03-31 04:59:16.393362 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393372 | orchestrator | 2026-03-31 04:59:16.393382 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 04:59:16.393392 | orchestrator 
| Tuesday 31 March 2026 04:59:14 +0000 (0:00:00.116) 0:24:47.562 ********* 2026-03-31 04:59:16.393401 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393411 | orchestrator | 2026-03-31 04:59:16.393421 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 04:59:16.393433 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.219) 0:24:47.781 ********* 2026-03-31 04:59:16.393445 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393457 | orchestrator | 2026-03-31 04:59:16.393468 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 04:59:16.393479 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.120) 0:24:47.902 ********* 2026-03-31 04:59:16.393491 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393503 | orchestrator | 2026-03-31 04:59:16.393514 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 04:59:16.393525 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.137) 0:24:48.039 ********* 2026-03-31 04:59:16.393537 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:16.393548 | orchestrator | 2026-03-31 04:59:16.393559 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 04:59:16.393570 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.160) 0:24:48.200 ********* 2026-03-31 04:59:16.393581 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:16.393592 | orchestrator | 2026-03-31 04:59:16.393603 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 04:59:16.393614 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.136) 0:24:48.336 ********* 2026-03-31 04:59:16.393625 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:16.393636 | orchestrator | 2026-03-31 04:59:16.393647 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 04:59:16.393658 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.169) 0:24:48.505 *********
2026-03-31 04:59:16.393669 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:16.393685 | orchestrator |
2026-03-31 04:59:16.393722 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-31 04:59:16.393747 | orchestrator | Tuesday 31 March 2026 04:59:15 +0000 (0:00:00.125) 0:24:48.630 *********
2026-03-31 04:59:16.393764 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:16.393809 | orchestrator |
2026-03-31 04:59:16.393823 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-31 04:59:16.393838 | orchestrator | Tuesday 31 March 2026 04:59:16 +0000 (0:00:00.189) 0:24:48.820 *********
2026-03-31 04:59:16.393856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.393907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}})
2026-03-31 04:59:16.393954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-31 04:59:16.393974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}})
2026-03-31 04:59:16.393992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.394012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.394099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-31 04:59:16.394119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.394130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-31 04:59:16.394164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.719236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}})
2026-03-31 04:59:16.719331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}})
2026-03-31 04:59:16.719346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.719376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-31 04:59:16.719422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.719433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-31 04:59:16.719443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-31 04:59:16.719453 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:16.719464 | orchestrator |
2026-03-31 04:59:16.719473 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-31 04:59:16.719482 | orchestrator | Tuesday 31 March 2026 04:59:16 +0000 (0:00:00.367) 0:24:49.188 *********
2026-03-31 04:59:16.719492 | orchestrator | skipping: [testbed-node-3] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.719511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7', 'dm-uuid-LVM-KejqHBdnFtLSyyC9R84nyz1yANxrpRIXzilsodjHoTjpW17LoAebYG18loNV682y'], 'uuids': ['e0243936-4e5c-4d79-8eb8-83df85650a2f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.719522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e', 'scsi-SQEMU_QEMU_HARDDISK_a878a648-90f8-45a8-8930-74e801ae2e4e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a878a648', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.719538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lFSq2g-b3FP-rBDh-oytj-DsQd-47zI-8ZR1ba', 'scsi-0QEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9', 'scsi-SQEMU_QEMU_HARDDISK_820fa545-b298-47e1-b072-447ef233e5c9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm', 'dm-uuid-CRYPT-LUKS2-c1688bff06c1489bb542bf83ea59d0b8-ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dad98f55--09f4--5a2b--a5c7--aafce2660c53-osd--block--dad98f55--09f4--5a2b--a5c7--aafce2660c53', 'dm-uuid-LVM-3PGokd0XE9nIVZhiheUbxNcBNNscsDrxttbUQtJ3i25YBfd39yc024Mn1ftAcrtm'], 'uuids': ['c1688bff-06c1-489b-b542-bf83ea59d0b8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '820fa545', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ttbUQt-J3i2-5YBf-d39y-c024-Mn1f-tAcrtm']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ysmeMC-hqe7-I7iJ-JTkz-gYYz-B5UB-UbMPzu', 'scsi-0QEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c', 'scsi-SQEMU_QEMU_HARDDISK_c466d3ef-6614-47a1-86d1-ef83336ce84c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c466d3ef', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--67174221--9040--517a--ae84--daf8ebd704d7-osd--block--67174221--9040--517a--ae84--daf8ebd704d7']}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:16.899660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53e77e6d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1', 'scsi-SQEMU_QEMU_HARDDISK_53e77e6d-528f-491f-9dcc-6d0bc8238047-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:25.364043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:25.364201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:25.364221 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y', 'dm-uuid-CRYPT-LUKS2-e02439364e5c4d798eb883df85650a2f-zilsod-jHoT-jpW1-7LoA-ebYG-18lo-NV682y'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 04:59:25.364236 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364250 | orchestrator |
2026-03-31 04:59:25.364263 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 04:59:25.364275 | orchestrator | Tuesday 31 March 2026 04:59:16 +0000 (0:00:00.384) 0:24:49.572 *********
2026-03-31 04:59:25.364287 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:25.364299 | orchestrator |
2026-03-31 04:59:25.364311 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 04:59:25.364321 | orchestrator | Tuesday 31 March 2026 04:59:17 +0000 (0:00:00.768) 0:24:50.341 *********
2026-03-31 04:59:25.364332 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:25.364343 | orchestrator |
2026-03-31 04:59:25.364354 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:59:25.364365 | orchestrator | Tuesday 31 March 2026 04:59:17 +0000 (0:00:00.141) 0:24:50.482 *********
2026-03-31 04:59:25.364375 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:25.364386 | orchestrator |
2026-03-31 04:59:25.364397 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:59:25.364408 | orchestrator | Tuesday 31 March 2026 04:59:18 +0000 (0:00:00.479) 0:24:50.962 *********
2026-03-31 04:59:25.364435 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364447 | orchestrator |
2026-03-31 04:59:25.364458 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 04:59:25.364469 | orchestrator | Tuesday 31 March 2026 04:59:18 +0000 (0:00:00.140) 0:24:51.103 *********
2026-03-31 04:59:25.364480 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364491 | orchestrator |
2026-03-31 04:59:25.364502 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 04:59:25.364514 | orchestrator | Tuesday 31 March 2026 04:59:18 +0000 (0:00:00.244) 0:24:51.347 *********
2026-03-31 04:59:25.364525 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364536 | orchestrator |
2026-03-31 04:59:25.364546 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 04:59:25.364560 | orchestrator | Tuesday 31 March 2026 04:59:18 +0000 (0:00:00.162) 0:24:51.509 *********
2026-03-31 04:59:25.364572 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:59:25.364585 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:59:25.364597 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:59:25.364617 | orchestrator |
2026-03-31 04:59:25.364631 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 04:59:25.364644 | orchestrator | Tuesday 31 March 2026 04:59:19 +0000 (0:00:00.787) 0:24:52.297 *********
2026-03-31 04:59:25.364656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-31 04:59:25.364669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-31 04:59:25.364681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-31 04:59:25.364694 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364706 | orchestrator |
2026-03-31 04:59:25.364718 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 04:59:25.364731 | orchestrator | Tuesday 31 March 2026 04:59:19 +0000 (0:00:00.169) 0:24:52.466 *********
2026-03-31 04:59:25.364763 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-03-31 04:59:25.364815 | orchestrator |
2026-03-31 04:59:25.364832 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 04:59:25.364846 | orchestrator | Tuesday 31 March 2026 04:59:20 +0000 (0:00:00.254) 0:24:52.721 *********
2026-03-31 04:59:25.364859 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364872 | orchestrator |
2026-03-31 04:59:25.364883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 04:59:25.364894 | orchestrator | Tuesday 31 March 2026 04:59:20 +0000 (0:00:00.127) 0:24:52.849 *********
2026-03-31 04:59:25.364905 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364916 | orchestrator |
2026-03-31 04:59:25.364927 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 04:59:25.364938 | orchestrator | Tuesday 31 March 2026 04:59:20 +0000 (0:00:00.191) 0:24:53.041 *********
2026-03-31 04:59:25.364949 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.364960 | orchestrator |
2026-03-31 04:59:25.364971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 04:59:25.364982 | orchestrator | Tuesday 31 March 2026 04:59:20 +0000 (0:00:00.170) 0:24:53.211 *********
2026-03-31 04:59:25.364993 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:25.365004 | orchestrator |
2026-03-31 04:59:25.365021 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 04:59:25.365032 | orchestrator | Tuesday 31 March 2026 04:59:21 +0000 (0:00:00.560) 0:24:53.771 *********
2026-03-31 04:59:25.365043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:59:25.365054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:59:25.365065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:59:25.365075 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.365086 | orchestrator |
2026-03-31 04:59:25.365097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 04:59:25.365108 | orchestrator | Tuesday 31 March 2026 04:59:21 +0000 (0:00:00.392) 0:24:54.164 *********
2026-03-31 04:59:25.365119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:59:25.365130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:59:25.365141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:59:25.365152 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.365163 | orchestrator |
2026-03-31 04:59:25.365174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 04:59:25.365185 | orchestrator | Tuesday 31 March 2026 04:59:21 +0000 (0:00:00.395) 0:24:54.559 *********
2026-03-31 04:59:25.365195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:59:25.365206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 04:59:25.365217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 04:59:25.365228 | orchestrator | skipping: [testbed-node-3]
2026-03-31 04:59:25.365246 | orchestrator |
2026-03-31 04:59:25.365258 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 04:59:25.365269 | orchestrator | Tuesday 31 March 2026 04:59:22 +0000 (0:00:00.411) 0:24:54.971 *********
2026-03-31 04:59:25.365280 | orchestrator | ok: [testbed-node-3]
2026-03-31 04:59:25.365290 | orchestrator |
2026-03-31 04:59:25.365301 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 04:59:25.365312 | orchestrator | Tuesday 31 March 2026 04:59:22 +0000 (0:00:00.165) 0:24:55.136 *********
2026-03-31 04:59:25.365323 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-31 04:59:25.365334 | orchestrator |
2026-03-31 04:59:25.365345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 04:59:25.365356 | orchestrator | Tuesday 31 March 2026 04:59:22 +0000 (0:00:00.376) 0:24:55.512 *********
2026-03-31 04:59:25.365367 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:59:25.365377 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:59:25.365388 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:59:25.365399 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:59:25.365410 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 04:59:25.365421 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-31 04:59:25.365432 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 04:59:25.365443 | orchestrator |
2026-03-31 04:59:25.365454 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 04:59:25.365464 | orchestrator | Tuesday 31 March 2026 04:59:23 +0000 (0:00:00.815) 0:24:56.328 *********
2026-03-31 04:59:25.365475 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 04:59:25.365486 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 04:59:25.365497 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 04:59:25.365508 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 04:59:25.365519
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-31 04:59:25.365530 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 04:59:25.365541 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 04:59:25.365552 | orchestrator | 2026-03-31 04:59:25.365569 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-31 04:59:40.606995 | orchestrator | Tuesday 31 March 2026 04:59:25 +0000 (0:00:01.702) 0:24:58.030 ********* 2026-03-31 04:59:40.607115 | orchestrator | changed: [testbed-node-3] 2026-03-31 04:59:40.607133 | orchestrator | 2026-03-31 04:59:40.607147 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-31 04:59:40.607159 | orchestrator | Tuesday 31 March 2026 04:59:27 +0000 (0:00:02.260) 0:25:00.291 ********* 2026-03-31 04:59:40.607170 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:59:40.607183 | orchestrator | 2026-03-31 04:59:40.607194 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-31 04:59:40.607206 | orchestrator | Tuesday 31 March 2026 04:59:29 +0000 (0:00:01.793) 0:25:02.085 ********* 2026-03-31 04:59:40.607217 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-31 04:59:40.607228 | orchestrator | 2026-03-31 04:59:40.607239 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 04:59:40.607250 | orchestrator | Tuesday 31 March 2026 04:59:30 +0000 (0:00:01.253) 0:25:03.339 ********* 2026-03-31 04:59:40.607299 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-31 04:59:40.607312 | orchestrator | 2026-03-31 04:59:40.607323 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 04:59:40.607334 | orchestrator | Tuesday 31 March 2026 04:59:31 +0000 (0:00:00.483) 0:25:03.822 ********* 2026-03-31 04:59:40.607345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-31 04:59:40.607356 | orchestrator | 2026-03-31 04:59:40.607368 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 04:59:40.607379 | orchestrator | Tuesday 31 March 2026 04:59:31 +0000 (0:00:00.218) 0:25:04.041 ********* 2026-03-31 04:59:40.607390 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607401 | orchestrator | 2026-03-31 04:59:40.607412 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 04:59:40.607423 | orchestrator | Tuesday 31 March 2026 04:59:31 +0000 (0:00:00.130) 0:25:04.172 ********* 2026-03-31 04:59:40.607434 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607446 | orchestrator | 2026-03-31 04:59:40.607458 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-31 04:59:40.607469 | orchestrator | Tuesday 31 March 2026 04:59:32 +0000 (0:00:00.509) 0:25:04.681 ********* 2026-03-31 04:59:40.607480 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607492 | orchestrator | 2026-03-31 04:59:40.607503 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 04:59:40.607514 | orchestrator | Tuesday 31 March 2026 04:59:32 +0000 (0:00:00.537) 0:25:05.219 ********* 2026-03-31 04:59:40.607525 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607538 | orchestrator | 2026-03-31 04:59:40.607550 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 04:59:40.607563 | orchestrator | Tuesday 31 March 2026 04:59:33 +0000 (0:00:00.517) 0:25:05.737 ********* 2026-03-31 04:59:40.607576 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607588 | orchestrator | 2026-03-31 04:59:40.607600 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 04:59:40.607613 | orchestrator | Tuesday 31 March 2026 04:59:33 +0000 (0:00:00.127) 0:25:05.864 ********* 2026-03-31 04:59:40.607626 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607638 | orchestrator | 2026-03-31 04:59:40.607651 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 04:59:40.607664 | orchestrator | Tuesday 31 March 2026 04:59:33 +0000 (0:00:00.128) 0:25:05.993 ********* 2026-03-31 04:59:40.607677 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607688 | orchestrator | 2026-03-31 04:59:40.607700 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 04:59:40.607711 | orchestrator | Tuesday 31 March 2026 04:59:33 +0000 (0:00:00.135) 0:25:06.129 ********* 2026-03-31 04:59:40.607722 | 
orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607733 | orchestrator | 2026-03-31 04:59:40.607744 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 04:59:40.607755 | orchestrator | Tuesday 31 March 2026 04:59:33 +0000 (0:00:00.537) 0:25:06.666 ********* 2026-03-31 04:59:40.607766 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607801 | orchestrator | 2026-03-31 04:59:40.607813 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 04:59:40.607823 | orchestrator | Tuesday 31 March 2026 04:59:34 +0000 (0:00:00.543) 0:25:07.209 ********* 2026-03-31 04:59:40.607834 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607846 | orchestrator | 2026-03-31 04:59:40.607857 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 04:59:40.607868 | orchestrator | Tuesday 31 March 2026 04:59:34 +0000 (0:00:00.145) 0:25:07.355 ********* 2026-03-31 04:59:40.607879 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.607890 | orchestrator | 2026-03-31 04:59:40.607910 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 04:59:40.607921 | orchestrator | Tuesday 31 March 2026 04:59:34 +0000 (0:00:00.144) 0:25:07.500 ********* 2026-03-31 04:59:40.607932 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607943 | orchestrator | 2026-03-31 04:59:40.607954 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 04:59:40.607965 | orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.479) 0:25:07.979 ********* 2026-03-31 04:59:40.607976 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.607987 | orchestrator | 2026-03-31 04:59:40.607998 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 04:59:40.608009 
| orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.163) 0:25:08.142 ********* 2026-03-31 04:59:40.608020 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.608031 | orchestrator | 2026-03-31 04:59:40.608060 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 04:59:40.608072 | orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.145) 0:25:08.288 ********* 2026-03-31 04:59:40.608083 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608095 | orchestrator | 2026-03-31 04:59:40.608106 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 04:59:40.608117 | orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.130) 0:25:08.418 ********* 2026-03-31 04:59:40.608128 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608139 | orchestrator | 2026-03-31 04:59:40.608151 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 04:59:40.608162 | orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.136) 0:25:08.555 ********* 2026-03-31 04:59:40.608173 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608184 | orchestrator | 2026-03-31 04:59:40.608195 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 04:59:40.608206 | orchestrator | Tuesday 31 March 2026 04:59:35 +0000 (0:00:00.116) 0:25:08.672 ********* 2026-03-31 04:59:40.608217 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.608228 | orchestrator | 2026-03-31 04:59:40.608239 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 04:59:40.608250 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.143) 0:25:08.815 ********* 2026-03-31 04:59:40.608261 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.608272 | orchestrator | 2026-03-31 04:59:40.608289 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 04:59:40.608301 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.230) 0:25:09.045 ********* 2026-03-31 04:59:40.608312 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608323 | orchestrator | 2026-03-31 04:59:40.608334 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 04:59:40.608345 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.132) 0:25:09.177 ********* 2026-03-31 04:59:40.608356 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608367 | orchestrator | 2026-03-31 04:59:40.608378 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 04:59:40.608389 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.127) 0:25:09.305 ********* 2026-03-31 04:59:40.608400 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608411 | orchestrator | 2026-03-31 04:59:40.608422 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 04:59:40.608433 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.115) 0:25:09.421 ********* 2026-03-31 04:59:40.608444 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608482 | orchestrator | 2026-03-31 04:59:40.608494 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 04:59:40.608504 | orchestrator | Tuesday 31 March 2026 04:59:36 +0000 (0:00:00.127) 0:25:09.549 ********* 2026-03-31 04:59:40.608516 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608527 | orchestrator | 2026-03-31 04:59:40.608538 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 04:59:40.608557 | orchestrator | Tuesday 31 March 2026 04:59:37 +0000 (0:00:00.456) 0:25:10.005 ********* 
2026-03-31 04:59:40.608568 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608579 | orchestrator | 2026-03-31 04:59:40.608590 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 04:59:40.608600 | orchestrator | Tuesday 31 March 2026 04:59:37 +0000 (0:00:00.154) 0:25:10.159 ********* 2026-03-31 04:59:40.608611 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608622 | orchestrator | 2026-03-31 04:59:40.608633 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 04:59:40.608645 | orchestrator | Tuesday 31 March 2026 04:59:37 +0000 (0:00:00.133) 0:25:10.292 ********* 2026-03-31 04:59:40.608656 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608667 | orchestrator | 2026-03-31 04:59:40.608678 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 04:59:40.608689 | orchestrator | Tuesday 31 March 2026 04:59:37 +0000 (0:00:00.131) 0:25:10.424 ********* 2026-03-31 04:59:40.608700 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608711 | orchestrator | 2026-03-31 04:59:40.608722 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 04:59:40.608733 | orchestrator | Tuesday 31 March 2026 04:59:37 +0000 (0:00:00.130) 0:25:10.555 ********* 2026-03-31 04:59:40.608743 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608754 | orchestrator | 2026-03-31 04:59:40.608765 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 04:59:40.608800 | orchestrator | Tuesday 31 March 2026 04:59:38 +0000 (0:00:00.135) 0:25:10.690 ********* 2026-03-31 04:59:40.608811 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608822 | orchestrator | 2026-03-31 04:59:40.608832 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-31 04:59:40.608844 | orchestrator | Tuesday 31 March 2026 04:59:38 +0000 (0:00:00.127) 0:25:10.818 ********* 2026-03-31 04:59:40.608854 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:40.608865 | orchestrator | 2026-03-31 04:59:40.608876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 04:59:40.608887 | orchestrator | Tuesday 31 March 2026 04:59:38 +0000 (0:00:00.199) 0:25:11.017 ********* 2026-03-31 04:59:40.608898 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.608909 | orchestrator | 2026-03-31 04:59:40.608920 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 04:59:40.608930 | orchestrator | Tuesday 31 March 2026 04:59:39 +0000 (0:00:00.897) 0:25:11.915 ********* 2026-03-31 04:59:40.608941 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:40.608952 | orchestrator | 2026-03-31 04:59:40.608963 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 04:59:40.608974 | orchestrator | Tuesday 31 March 2026 04:59:40 +0000 (0:00:01.158) 0:25:13.073 ********* 2026-03-31 04:59:40.608984 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-31 04:59:40.608995 | orchestrator | 2026-03-31 04:59:40.609007 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 04:59:40.609025 | orchestrator | Tuesday 31 March 2026 04:59:40 +0000 (0:00:00.200) 0:25:13.273 ********* 2026-03-31 04:59:56.013358 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.013475 | orchestrator | 2026-03-31 04:59:56.013492 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 04:59:56.013505 | orchestrator | Tuesday 31 March 2026 04:59:41 +0000 (0:00:00.443) 0:25:13.717 ********* 
2026-03-31 04:59:56.013517 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.013528 | orchestrator | 2026-03-31 04:59:56.013540 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 04:59:56.013551 | orchestrator | Tuesday 31 March 2026 04:59:41 +0000 (0:00:00.134) 0:25:13.851 ********* 2026-03-31 04:59:56.013563 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 04:59:56.013599 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 04:59:56.013612 | orchestrator | 2026-03-31 04:59:56.013622 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 04:59:56.013633 | orchestrator | Tuesday 31 March 2026 04:59:41 +0000 (0:00:00.795) 0:25:14.647 ********* 2026-03-31 04:59:56.013644 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:56.013656 | orchestrator | 2026-03-31 04:59:56.013667 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 04:59:56.013691 | orchestrator | Tuesday 31 March 2026 04:59:42 +0000 (0:00:00.480) 0:25:15.128 ********* 2026-03-31 04:59:56.013703 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.013714 | orchestrator | 2026-03-31 04:59:56.013725 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 04:59:56.013736 | orchestrator | Tuesday 31 March 2026 04:59:42 +0000 (0:00:00.164) 0:25:15.292 ********* 2026-03-31 04:59:56.013797 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.013810 | orchestrator | 2026-03-31 04:59:56.013821 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 04:59:56.013832 | orchestrator | Tuesday 31 March 2026 04:59:42 +0000 (0:00:00.170) 0:25:15.463 ********* 2026-03-31 04:59:56.013843 | orchestrator | 
skipping: [testbed-node-3] 2026-03-31 04:59:56.013854 | orchestrator | 2026-03-31 04:59:56.013865 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 04:59:56.013876 | orchestrator | Tuesday 31 March 2026 04:59:42 +0000 (0:00:00.117) 0:25:15.580 ********* 2026-03-31 04:59:56.013886 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-31 04:59:56.013898 | orchestrator | 2026-03-31 04:59:56.013912 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 04:59:56.013925 | orchestrator | Tuesday 31 March 2026 04:59:43 +0000 (0:00:00.216) 0:25:15.797 ********* 2026-03-31 04:59:56.013937 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:56.013950 | orchestrator | 2026-03-31 04:59:56.013962 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 04:59:56.013975 | orchestrator | Tuesday 31 March 2026 04:59:43 +0000 (0:00:00.675) 0:25:16.472 ********* 2026-03-31 04:59:56.013987 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 04:59:56.013999 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 04:59:56.014011 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 04:59:56.014080 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014092 | orchestrator | 2026-03-31 04:59:56.014105 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 04:59:56.014118 | orchestrator | Tuesday 31 March 2026 04:59:43 +0000 (0:00:00.165) 0:25:16.638 ********* 2026-03-31 04:59:56.014130 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014214 | orchestrator | 2026-03-31 04:59:56.014228 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-31 04:59:56.014241 | orchestrator | Tuesday 31 March 2026 04:59:44 +0000 (0:00:00.143) 0:25:16.781 ********* 2026-03-31 04:59:56.014253 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014265 | orchestrator | 2026-03-31 04:59:56.014275 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 04:59:56.014286 | orchestrator | Tuesday 31 March 2026 04:59:44 +0000 (0:00:00.175) 0:25:16.957 ********* 2026-03-31 04:59:56.014296 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014307 | orchestrator | 2026-03-31 04:59:56.014333 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 04:59:56.014345 | orchestrator | Tuesday 31 March 2026 04:59:44 +0000 (0:00:00.453) 0:25:17.410 ********* 2026-03-31 04:59:56.014356 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014388 | orchestrator | 2026-03-31 04:59:56.014399 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 04:59:56.014410 | orchestrator | Tuesday 31 March 2026 04:59:44 +0000 (0:00:00.148) 0:25:17.559 ********* 2026-03-31 04:59:56.014421 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014432 | orchestrator | 2026-03-31 04:59:56.014443 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 04:59:56.014453 | orchestrator | Tuesday 31 March 2026 04:59:45 +0000 (0:00:00.166) 0:25:17.726 ********* 2026-03-31 04:59:56.014464 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:56.014508 | orchestrator | 2026-03-31 04:59:56.014520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 04:59:56.014531 | orchestrator | Tuesday 31 March 2026 04:59:46 +0000 (0:00:01.550) 0:25:19.277 ********* 2026-03-31 04:59:56.014542 | orchestrator | ok: 
[testbed-node-3] 2026-03-31 04:59:56.014553 | orchestrator | 2026-03-31 04:59:56.014563 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 04:59:56.014574 | orchestrator | Tuesday 31 March 2026 04:59:46 +0000 (0:00:00.145) 0:25:19.422 ********* 2026-03-31 04:59:56.014585 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-31 04:59:56.014595 | orchestrator | 2026-03-31 04:59:56.014606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 04:59:56.014638 | orchestrator | Tuesday 31 March 2026 04:59:46 +0000 (0:00:00.224) 0:25:19.646 ********* 2026-03-31 04:59:56.014650 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014661 | orchestrator | 2026-03-31 04:59:56.014672 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 04:59:56.014683 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.157) 0:25:19.803 ********* 2026-03-31 04:59:56.014694 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014704 | orchestrator | 2026-03-31 04:59:56.014715 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 04:59:56.014785 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.155) 0:25:19.959 ********* 2026-03-31 04:59:56.014802 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014813 | orchestrator | 2026-03-31 04:59:56.014824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 04:59:56.014834 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.155) 0:25:20.114 ********* 2026-03-31 04:59:56.014845 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014856 | orchestrator | 2026-03-31 04:59:56.014867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-31 04:59:56.014878 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.166) 0:25:20.281 ********* 2026-03-31 04:59:56.014889 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014899 | orchestrator | 2026-03-31 04:59:56.014917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 04:59:56.014929 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.152) 0:25:20.433 ********* 2026-03-31 04:59:56.014939 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014950 | orchestrator | 2026-03-31 04:59:56.014961 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 04:59:56.014972 | orchestrator | Tuesday 31 March 2026 04:59:47 +0000 (0:00:00.152) 0:25:20.585 ********* 2026-03-31 04:59:56.014983 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.014994 | orchestrator | 2026-03-31 04:59:56.015004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 04:59:56.015015 | orchestrator | Tuesday 31 March 2026 04:59:48 +0000 (0:00:00.144) 0:25:20.730 ********* 2026-03-31 04:59:56.015026 | orchestrator | skipping: [testbed-node-3] 2026-03-31 04:59:56.015037 | orchestrator | 2026-03-31 04:59:56.015048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 04:59:56.015058 | orchestrator | Tuesday 31 March 2026 04:59:48 +0000 (0:00:00.428) 0:25:21.158 ********* 2026-03-31 04:59:56.015078 | orchestrator | ok: [testbed-node-3] 2026-03-31 04:59:56.015089 | orchestrator | 2026-03-31 04:59:56.015100 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 04:59:56.015111 | orchestrator | Tuesday 31 March 2026 04:59:48 +0000 (0:00:00.236) 0:25:21.394 ********* 2026-03-31 04:59:56.015122 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-31 04:59:56.015133 | orchestrator | 2026-03-31 04:59:56.015144 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 04:59:56.015155 | orchestrator | Tuesday 31 March 2026 04:59:48 +0000 (0:00:00.210) 0:25:21.605 ********* 2026-03-31 04:59:56.015165 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-31 04:59:56.015177 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-31 04:59:56.015187 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-31 04:59:56.015198 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-31 04:59:56.015209 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-31 04:59:56.015220 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-31 04:59:56.015231 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-31 04:59:56.015242 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-31 04:59:56.015253 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 04:59:56.015264 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 04:59:56.015275 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 04:59:56.015286 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 04:59:56.015538 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 04:59:56.015553 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 04:59:56.015564 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-31 04:59:56.015606 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-31 04:59:56.015631 | orchestrator | 2026-03-31 04:59:56.015643 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-31 04:59:56.015665 | orchestrator | Tuesday 31 March 2026 04:59:54 +0000 (0:00:05.391) 0:25:26.997 *********
2026-03-31 04:59:56.015676 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-31 04:59:56.015687 | orchestrator |
2026-03-31 04:59:56.015698 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-31 04:59:56.015709 | orchestrator | Tuesday 31 March 2026 04:59:54 +0000 (0:00:00.202) 0:25:27.200 *********
2026-03-31 04:59:56.015720 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 04:59:56.015732 | orchestrator |
2026-03-31 04:59:56.015743 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-31 04:59:56.015800 | orchestrator | Tuesday 31 March 2026 04:59:55 +0000 (0:00:00.513) 0:25:27.713 *********
2026-03-31 04:59:56.015812 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 04:59:56.015823 | orchestrator |
2026-03-31 04:59:56.015834 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-31 04:59:56.015855 | orchestrator | Tuesday 31 March 2026 04:59:55 +0000 (0:00:00.963) 0:25:28.677 *********
2026-03-31 05:00:13.339253 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339351 | orchestrator |
2026-03-31 05:00:13.339368 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-31 05:00:13.339380 | orchestrator | Tuesday 31 March 2026 04:59:56 +0000 (0:00:00.146) 0:25:28.823 *********
2026-03-31 05:00:13.339392 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339403 | orchestrator |
2026-03-31 05:00:13.339415 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-31 05:00:13.339447 | orchestrator | Tuesday 31 March 2026 04:59:56 +0000 (0:00:00.138) 0:25:28.962 *********
2026-03-31 05:00:13.339459 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339470 | orchestrator |
2026-03-31 05:00:13.339481 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-31 05:00:13.339492 | orchestrator | Tuesday 31 March 2026 04:59:56 +0000 (0:00:00.129) 0:25:29.091 *********
2026-03-31 05:00:13.339503 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339514 | orchestrator |
2026-03-31 05:00:13.339525 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-31 05:00:13.339536 | orchestrator | Tuesday 31 March 2026 04:59:56 +0000 (0:00:00.427) 0:25:29.519 *********
2026-03-31 05:00:13.339547 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339558 | orchestrator |
2026-03-31 05:00:13.339569 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-31 05:00:13.339582 | orchestrator | Tuesday 31 March 2026 04:59:56 +0000 (0:00:00.137) 0:25:29.655 *********
2026-03-31 05:00:13.339593 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339604 | orchestrator |
2026-03-31 05:00:13.339615 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-31 05:00:13.339626 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.137) 0:25:29.793 *********
2026-03-31 05:00:13.339637 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339648 | orchestrator |
2026-03-31 05:00:13.339660 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-31 05:00:13.339671 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.149) 0:25:29.943 *********
2026-03-31 05:00:13.339682 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339693 | orchestrator |
2026-03-31 05:00:13.339705 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-31 05:00:13.339716 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.143) 0:25:30.086 *********
2026-03-31 05:00:13.339755 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339766 | orchestrator |
2026-03-31 05:00:13.339778 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-31 05:00:13.339791 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.129) 0:25:30.216 *********
2026-03-31 05:00:13.339804 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339817 | orchestrator |
2026-03-31 05:00:13.339829 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-31 05:00:13.339843 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.127) 0:25:30.344 *********
2026-03-31 05:00:13.339856 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.339868 | orchestrator |
2026-03-31 05:00:13.339879 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-31 05:00:13.339890 | orchestrator | Tuesday 31 March 2026 04:59:57 +0000 (0:00:00.161) 0:25:30.506 *********
2026-03-31 05:00:13.339901 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-31 05:00:13.339912 | orchestrator |
2026-03-31 05:00:13.339922 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-31 05:00:13.339933 | orchestrator | Tuesday 31 March 2026 05:00:01 +0000 (0:00:03.367) 0:25:33.873 *********
2026-03-31 05:00:13.339945 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 05:00:13.339956 | orchestrator |
2026-03-31 05:00:13.339967 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-31 05:00:13.339978 | orchestrator | Tuesday 31 March 2026 05:00:01 +0000 (0:00:00.170) 0:25:34.044 *********
2026-03-31 05:00:13.339992 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-31 05:00:13.340014 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-31 05:00:13.340026 | orchestrator |
2026-03-31 05:00:13.340037 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-31 05:00:13.340048 | orchestrator | Tuesday 31 March 2026 05:00:05 +0000 (0:00:03.907) 0:25:37.951 *********
2026-03-31 05:00:13.340059 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340070 | orchestrator |
2026-03-31 05:00:13.340081 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-31 05:00:13.340092 | orchestrator | Tuesday 31 March 2026 05:00:05 +0000 (0:00:00.156) 0:25:38.108 *********
2026-03-31 05:00:13.340103 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340114 | orchestrator |
2026-03-31 05:00:13.340125 | orchestrator | TASK [ceph-facts : Set current
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 05:00:13.340152 | orchestrator | Tuesday 31 March 2026 05:00:05 +0000 (0:00:00.138) 0:25:38.246 *********
2026-03-31 05:00:13.340164 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340175 | orchestrator |
2026-03-31 05:00:13.340186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 05:00:13.340196 | orchestrator | Tuesday 31 March 2026 05:00:06 +0000 (0:00:00.478) 0:25:38.725 *********
2026-03-31 05:00:13.340207 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340218 | orchestrator |
2026-03-31 05:00:13.340270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 05:00:13.340283 | orchestrator | Tuesday 31 March 2026 05:00:06 +0000 (0:00:00.173) 0:25:38.899 *********
2026-03-31 05:00:13.340294 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340305 | orchestrator |
2026-03-31 05:00:13.340316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 05:00:13.340327 | orchestrator | Tuesday 31 March 2026 05:00:06 +0000 (0:00:00.167) 0:25:39.067 *********
2026-03-31 05:00:13.340338 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:00:13.340349 | orchestrator |
2026-03-31 05:00:13.340360 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 05:00:13.340375 | orchestrator | Tuesday 31 March 2026 05:00:06 +0000 (0:00:00.219) 0:25:39.286 *********
2026-03-31 05:00:13.340386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 05:00:13.340397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 05:00:13.340408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 05:00:13.340419 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340429 | orchestrator |
2026-03-31 05:00:13.340440 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 05:00:13.340451 | orchestrator | Tuesday 31 March 2026 05:00:06 +0000 (0:00:00.376) 0:25:39.663 *********
2026-03-31 05:00:13.340462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 05:00:13.340473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 05:00:13.340484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 05:00:13.340494 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340505 | orchestrator |
2026-03-31 05:00:13.340516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 05:00:13.340527 | orchestrator | Tuesday 31 March 2026 05:00:07 +0000 (0:00:00.379) 0:25:40.043 *********
2026-03-31 05:00:13.340537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-31 05:00:13.340548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-31 05:00:13.340566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-31 05:00:13.340577 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340588 | orchestrator |
2026-03-31 05:00:13.340599 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 05:00:13.340610 | orchestrator | Tuesday 31 March 2026 05:00:07 +0000 (0:00:00.386) 0:25:40.429 *********
2026-03-31 05:00:13.340621 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:00:13.340632 | orchestrator |
2026-03-31 05:00:13.340642 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 05:00:13.340654 | orchestrator | Tuesday 31 March 2026 05:00:07 +0000 (0:00:00.151) 0:25:40.580 *********
2026-03-31 05:00:13.340664 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-31 05:00:13.340675 | orchestrator |
2026-03-31 05:00:13.340686 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-31 05:00:13.340697 | orchestrator | Tuesday 31 March 2026 05:00:08 +0000 (0:00:00.375) 0:25:40.956 *********
2026-03-31 05:00:13.340708 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:00:13.340739 | orchestrator |
2026-03-31 05:00:13.340752 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-31 05:00:13.340763 | orchestrator | Tuesday 31 March 2026 05:00:09 +0000 (0:00:00.759) 0:25:41.716 *********
2026-03-31 05:00:13.340774 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-03-31 05:00:13.340785 | orchestrator |
2026-03-31 05:00:13.340796 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-31 05:00:13.340807 | orchestrator | Tuesday 31 March 2026 05:00:09 +0000 (0:00:00.391) 0:25:42.107 *********
2026-03-31 05:00:13.340818 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 05:00:13.340829 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-31 05:00:13.340840 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-31 05:00:13.340851 | orchestrator |
2026-03-31 05:00:13.340862 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-31 05:00:13.340872 | orchestrator | Tuesday 31 March 2026 05:00:11 +0000 (0:00:02.123) 0:25:44.230 *********
2026-03-31 05:00:13.340884 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-31 05:00:13.340895 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-31 05:00:13.340906 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:00:13.340917 | orchestrator |
2026-03-31 05:00:13.340928 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-31 05:00:13.340939 | orchestrator | Tuesday 31 March 2026 05:00:12 +0000 (0:00:00.915) 0:25:45.146 *********
2026-03-31 05:00:13.340950 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:00:13.340961 | orchestrator |
2026-03-31 05:00:13.340972 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-31 05:00:13.340983 | orchestrator | Tuesday 31 March 2026 05:00:12 +0000 (0:00:00.093) 0:25:45.239 *********
2026-03-31 05:00:13.340994 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-03-31 05:00:13.341005 | orchestrator |
2026-03-31 05:00:13.341016 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-31 05:00:13.341027 | orchestrator | Tuesday 31 March 2026 05:00:12 +0000 (0:00:00.177) 0:25:45.417 *********
2026-03-31 05:00:13.341046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 05:01:01.011873 | orchestrator |
2026-03-31 05:01:01.012099 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-31 05:01:01.012132 | orchestrator | Tuesday 31 March 2026 05:00:13 +0000 (0:00:00.592) 0:25:46.010 *********
2026-03-31 05:01:01.012145 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 05:01:01.012158 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-31 05:01:01.012199 | orchestrator |
2026-03-31 05:01:01.012214 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-31 05:01:01.012229 | orchestrator | Tuesday 31 March 2026 05:00:17 +0000 (0:00:03.955) 0:25:49.965 *********
2026-03-31 05:01:01.012244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-31 05:01:01.012259 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-31 05:01:01.012272 | orchestrator |
2026-03-31 05:01:01.012302 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-31 05:01:01.012315 | orchestrator | Tuesday 31 March 2026 05:00:19 +0000 (0:00:01.984) 0:25:51.950 *********
2026-03-31 05:01:01.012329 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-31 05:01:01.012343 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:01:01.012357 | orchestrator |
2026-03-31 05:01:01.012371 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-31 05:01:01.012385 | orchestrator | Tuesday 31 March 2026 05:00:20 +0000 (0:00:00.977) 0:25:52.928 *********
2026-03-31 05:01:01.012398 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-31 05:01:01.012411 | orchestrator |
2026-03-31 05:01:01.012424 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-31 05:01:01.012437 | orchestrator | Tuesday 31 March 2026 05:00:20 +0000 (0:00:00.226) 0:25:53.154 *********
2026-03-31 05:01:01.012450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012518 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:01:01.012532 | orchestrator |
2026-03-31 05:01:01.012545 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-31 05:01:01.012558 | orchestrator | Tuesday 31 March 2026 05:00:21 +0000 (0:00:00.908) 0:25:54.063 *********
2026-03-31 05:01:01.012570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012625 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:01:01.012636 | orchestrator |
2026-03-31 05:01:01.012647 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-31 05:01:01.012683 | orchestrator | Tuesday 31 March 2026 05:00:22 +0000 (0:00:01.212) 0:25:55.276 *********
2026-03-31 05:01:01.012695 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012740 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012751 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-31 05:01:01.012763 | orchestrator |
2026-03-31 05:01:01.012774 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-31 05:01:01.012834 | orchestrator | Tuesday 31 March 2026 05:00:51 +0000 (0:00:28.995) 0:26:24.271 *********
2026-03-31 05:01:01.012847 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:01:01.012859 | orchestrator |
2026-03-31 05:01:01.012870 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-31 05:01:01.012881 | orchestrator | Tuesday 31 March 2026 05:00:51 +0000 (0:00:00.127) 0:26:24.399 *********
2026-03-31 05:01:01.012892 | orchestrator | skipping: [testbed-node-3]
2026-03-31 05:01:01.012903 | orchestrator |
2026-03-31 05:01:01.012914 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-31 05:01:01.012925 | orchestrator | Tuesday 31 March 2026 05:00:51 +0000 (0:00:00.127) 0:26:24.527 *********
2026-03-31 05:01:01.012936 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3
2026-03-31 05:01:01.012947 | orchestrator |
2026-03-31 05:01:01.012958 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-31 05:01:01.012969 | orchestrator | Tuesday 31 March 2026 05:00:52 +0000 (0:00:00.220) 0:26:24.747 *********
2026-03-31 05:01:01.012980 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3
2026-03-31 05:01:01.012992 | orchestrator |
2026-03-31 05:01:01.013009 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-31 05:01:01.013020 | orchestrator | Tuesday 31 March 2026 05:00:52 +0000 (0:00:00.204) 0:26:24.952 *********
2026-03-31 05:01:01.013031 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:01:01.013042 | orchestrator |
2026-03-31 05:01:01.013053 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-31 05:01:01.013064 | orchestrator | Tuesday 31 March 2026 05:00:53 +0000 (0:00:01.075) 0:26:26.028 *********
2026-03-31 05:01:01.013075 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:01:01.013087 | orchestrator |
2026-03-31 05:01:01.013098 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-31 05:01:01.013109 | orchestrator | Tuesday 31 March 2026 05:00:54 +0000 (0:00:00.890) 0:26:26.919 *********
2026-03-31 05:01:01.013120 | orchestrator | ok: [testbed-node-3]
2026-03-31 05:01:01.013131 | orchestrator |
2026-03-31 05:01:01.013142 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-31 05:01:01.013153 | orchestrator | Tuesday 31 March 2026 05:00:55 +0000 (0:00:01.286) 0:26:28.205 *********
2026-03-31 05:01:01.013164 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-31 05:01:01.013175 | orchestrator |
2026-03-31 05:01:01.013187 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-03-31 05:01:01.013198 |
orchestrator |
2026-03-31 05:01:01.013209 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-31 05:01:01.013220 | orchestrator | Tuesday 31 March 2026 05:00:57 +0000 (0:00:02.232) 0:26:30.437 *********
2026-03-31 05:01:01.013231 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-03-31 05:01:01.013242 | orchestrator |
2026-03-31 05:01:01.013253 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-31 05:01:01.013264 | orchestrator | Tuesday 31 March 2026 05:00:58 +0000 (0:00:00.270) 0:26:30.708 *********
2026-03-31 05:01:01.013288 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013299 | orchestrator |
2026-03-31 05:01:01.013310 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-31 05:01:01.013321 | orchestrator | Tuesday 31 March 2026 05:00:58 +0000 (0:00:00.491) 0:26:31.200 *********
2026-03-31 05:01:01.013332 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013343 | orchestrator |
2026-03-31 05:01:01.013354 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-31 05:01:01.013365 | orchestrator | Tuesday 31 March 2026 05:00:58 +0000 (0:00:00.486) 0:26:31.348 *********
2026-03-31 05:01:01.013377 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013388 | orchestrator |
2026-03-31 05:01:01.013399 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-31 05:01:01.013410 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.486) 0:26:31.834 *********
2026-03-31 05:01:01.013421 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013432 | orchestrator |
2026-03-31 05:01:01.013443 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-31 05:01:01.013454 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.148) 0:26:31.983 *********
2026-03-31 05:01:01.013465 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013476 | orchestrator |
2026-03-31 05:01:01.013487 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-31 05:01:01.013498 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.142) 0:26:32.126 *********
2026-03-31 05:01:01.013509 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013520 | orchestrator |
2026-03-31 05:01:01.013531 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-31 05:01:01.013542 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.157) 0:26:32.284 *********
2026-03-31 05:01:01.013554 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:01.013565 | orchestrator |
2026-03-31 05:01:01.013576 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-31 05:01:01.013587 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.163) 0:26:32.448 *********
2026-03-31 05:01:01.013598 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:01.013609 | orchestrator |
2026-03-31 05:01:01.013620 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-31 05:01:01.013631 | orchestrator | Tuesday 31 March 2026 05:00:59 +0000 (0:00:00.155) 0:26:32.604 *********
2026-03-31 05:01:01.013642 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 05:01:01.013673 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 05:01:01.013685 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 05:01:01.013696 | orchestrator |
2026-03-31 05:01:01.013707 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-31 05:01:01.013725 | orchestrator | Tuesday 31 March 2026 05:01:00 +0000 (0:00:00.285) 0:26:33.675 *********
2026-03-31 05:01:08.592257 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.592379 | orchestrator |
2026-03-31 05:01:08.592404 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-31 05:01:08.592424 | orchestrator | Tuesday 31 March 2026 05:01:01 +0000 (0:00:00.285) 0:26:33.961 *********
2026-03-31 05:01:08.592444 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 05:01:08.592465 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 05:01:08.592485 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 05:01:08.592505 | orchestrator |
2026-03-31 05:01:08.592517 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-31 05:01:08.592528 | orchestrator | Tuesday 31 March 2026 05:01:03 +0000 (0:00:02.595) 0:26:36.557 *********
2026-03-31 05:01:08.592566 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 05:01:08.592592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 05:01:08.592604 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 05:01:08.592615 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.592627 | orchestrator |
2026-03-31 05:01:08.592638 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-31 05:01:08.592725 | orchestrator | Tuesday 31 March 2026 05:01:04 +0000 (0:00:00.411) 0:26:36.969 *********
2026-03-31 05:01:08.592796 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592877 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.592891 | orchestrator |
2026-03-31 05:01:08.592905 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-31 05:01:08.592918 | orchestrator | Tuesday 31 March 2026 05:01:04 +0000 (0:00:00.629) 0:26:37.598 *********
2026-03-31 05:01:08.592934 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592966 | orchestrator | skipping: [testbed-node-4] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.592979 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.592992 | orchestrator |
2026-03-31 05:01:08.593006 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-31 05:01:08.593019 | orchestrator | Tuesday 31 March 2026 05:01:05 +0000 (0:00:00.199) 0:26:37.797 *********
2026-03-31 05:01:08.593057 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 05:01:02.167981', 'end': '2026-03-31 05:01:02.213700', 'delta': '0:00:00.045719', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.593097 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 05:01:02.760651', 'end': '2026-03-31 05:01:02.808659', 'delta': '0:00:00.048008', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.593114 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 05:01:03.342374', 'end': '2026-03-31 05:01:03.383445', 'delta': '0:00:00.041071', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-31 05:01:08.593128 | orchestrator |
2026-03-31 05:01:08.593140 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-31 05:01:08.593151 | orchestrator | Tuesday 31 March 2026 05:01:05 +0000 (0:00:00.245) 0:26:38.043 *********
2026-03-31 05:01:08.593162 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.593174 | orchestrator |
2026-03-31 05:01:08.593185 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-31 05:01:08.593196 | orchestrator | Tuesday 31 March 2026 05:01:05 +0000 (0:00:00.284) 0:26:38.327 *********
2026-03-31 05:01:08.593207 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593218 | orchestrator |
2026-03-31 05:01:08.593229 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-31 05:01:08.593240 | orchestrator | Tuesday 31 March 2026 05:01:05 +0000 (0:00:00.272) 0:26:38.600 *********
2026-03-31 05:01:08.593251 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.593263 | orchestrator |
2026-03-31 05:01:08.593274 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-31 05:01:08.593285 | orchestrator | Tuesday 31 March 2026 05:01:06 +0000 (0:00:00.141) 0:26:38.742 *********
2026-03-31 05:01:08.593296 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-31 05:01:08.593307 | orchestrator |
2026-03-31 05:01:08.593319 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 05:01:08.593330 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.972) 0:26:39.714 *********
2026-03-31 05:01:08.593341 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.593352 | orchestrator |
2026-03-31 05:01:08.593363 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-31 05:01:08.593374 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.148) 0:26:39.862 *********
2026-03-31 05:01:08.593385 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593396 | orchestrator |
2026-03-31 05:01:08.593407 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-31 05:01:08.593418 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.124) 0:26:39.986 *********
2026-03-31 05:01:08.593429 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593441 | orchestrator |
2026-03-31 05:01:08.593452 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-31 05:01:08.593463 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.242) 0:26:40.229 *********
2026-03-31 05:01:08.593481 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593493 | orchestrator |
2026-03-31 05:01:08.593504 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-31 05:01:08.593515 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.118) 0:26:40.348 *********
2026-03-31 05:01:08.593526 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593537 | orchestrator |
2026-03-31 05:01:08.593548 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-31 05:01:08.593559 | orchestrator | Tuesday 31 March 2026 05:01:07 +0000 (0:00:00.127) 0:26:40.475 *********
2026-03-31 05:01:08.593570 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.593582 | orchestrator |
2026-03-31 05:01:08.593593 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-31 05:01:08.593604 | orchestrator | Tuesday 31 March 2026 05:01:08 +0000 (0:00:00.481) 0:26:40.956 *********
2026-03-31 05:01:08.593615 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:08.593626 | orchestrator |
2026-03-31 05:01:08.593637 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-31 05:01:08.593690 | orchestrator | Tuesday 31 March 2026 05:01:08 +0000 (0:00:00.139) 0:26:41.095 *********
2026-03-31 05:01:08.593701 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:08.593712 | orchestrator |
2026-03-31 05:01:08.593723 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-31 05:01:08.593743 | orchestrator | Tuesday 31 March 2026 05:01:08 +0000 (0:00:00.169) 0:26:41.265 *********
2026-03-31 05:01:09.120611 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:09.120800 | orchestrator |
2026-03-31 05:01:09.120826 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-31 05:01:09.120845
| orchestrator | Tuesday 31 March 2026 05:01:08 +0000 (0:00:00.132) 0:26:41.398 ********* 2026-03-31 05:01:09.120864 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:09.120883 | orchestrator | 2026-03-31 05:01:09.120901 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 05:01:09.120919 | orchestrator | Tuesday 31 March 2026 05:01:08 +0000 (0:00:00.164) 0:26:41.563 ********* 2026-03-31 05:01:09.120962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.120989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}})  2026-03-31 05:01:09.121013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 05:01:09.121065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}})  2026-03-31 05:01:09.121133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 05:01:09.121213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}})  2026-03-31 05:01:09.121282 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}})  2026-03-31 05:01:09.121297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.121334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 05:01:09.454527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.454735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:01:09.454766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 05:01:09.454784 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:09.454801 | orchestrator | 2026-03-31 05:01:09.454816 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 05:01:09.454830 | orchestrator | Tuesday 31 March 2026 05:01:09 +0000 (0:00:00.353) 0:26:41.917 ********* 2026-03-31 05:01:09.454846 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.454881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c', 'dm-uuid-LVM-voIvMScBNf0nn1UqP6J3mrL57Feo8hpsEfbBIXBLL2lbnvB5fpXdf3Vs7Oc4nA8j'], 'uuids': ['26974dbf-f0a7-4ca8-8b18-f9eb0862be76'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.454899 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351', 'scsi-SQEMU_QEMU_HARDDISK_5a64e844-a251-4ee7-a817-d55da64d6351'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64e844', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jppFpT-6287-H5UX-wadw-idvL-aDwi-H3fsQH', 'scsi-0QEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247', 'scsi-SQEMU_QEMU_HARDDISK_627ac388-afe2-405e-bfb6-93a96eeb5247'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455090 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:09.455117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2', 'dm-uuid-CRYPT-LUKS2-c911a2b9ffbe4994aafa7327c1153c91-jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.760943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761026 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb-osd--block--ff2f0fdf--59cf--5ca7--9eb2--a45b4abb67eb', 'dm-uuid-LVM-RwD1SDPPywNrcOLsCdJUWJCkPqisEw7IjN9YwlXbnLhNiiunicnne9TiGAxFnCN2'], 'uuids': ['c911a2b9-ffbe-4994-aafa-7327c1153c91'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '627ac388', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jN9Ywl-XbnL-hNii-unic-nne9-TiGA-xFnCN2']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761037 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pfZnnD-Ultt-g92I-R3gj-okuR-Ezub-rBAf3f', 'scsi-0QEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814', 'scsi-SQEMU_QEMU_HARDDISK_aca90cda-810a-4a3a-a8a4-a9246b552814'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca90cda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--da0b55d5--13d5--528b--aee2--5667f342587c-osd--block--da0b55d5--13d5--528b--aee2--5667f342587c']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9459331e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1', 'scsi-SQEMU_QEMU_HARDDISK_9459331e-414f-4bad-a4cf-8aef28266031-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:01:10.761137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j', 'dm-uuid-CRYPT-LUKS2-26974dbff0a74ca88b18f9eb0862be76-EfbBIX-BLL2-lbnv-B5fp-Xdf3-Vs7O-c4nA8j'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 05:01:10.761149 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:10.761183 | orchestrator |
2026-03-31 05:01:10.761199 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 05:01:10.761207 | orchestrator | Tuesday 31 March 2026 05:01:09 +0000 (0:00:00.514) 0:26:42.312 *********
2026-03-31 05:01:10.761213 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:10.761220 | orchestrator |
2026-03-31 05:01:10.761226 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 05:01:10.761232 | orchestrator | Tuesday 31 March 2026 05:01:10 +0000 (0:00:00.133) 0:26:42.826 *********
2026-03-31 05:01:10.761238 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:10.761244 | orchestrator |
2026-03-31 05:01:10.761252 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 05:01:10.761262 | orchestrator | Tuesday 31 March 2026 05:01:10 +0000 (0:00:00.472) 0:26:42.960 *********
2026-03-31 05:01:10.761272 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:10.761283 | orchestrator |
2026-03-31 05:01:10.761293 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 05:01:10.761307 | orchestrator | Tuesday 31 March 2026 05:01:10 +0000 (0:00:00.152) 0:26:43.432 *********
2026-03-31 05:01:26.179496 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.179681 | orchestrator |
2026-03-31 05:01:26.179700 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 05:01:26.179714 | orchestrator | Tuesday 31 March 2026 05:01:10 +0000 (0:00:00.225) 0:26:43.584 *********
2026-03-31 05:01:26.179726 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.179737 | orchestrator |
2026-03-31 05:01:26.179749 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 05:01:26.179761 | orchestrator | Tuesday 31 March 2026 05:01:11 +0000 (0:00:00.225) 0:26:43.810 *********
2026-03-31 05:01:26.179772 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.179784 | orchestrator |
2026-03-31 05:01:26.179795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 05:01:26.179807 | orchestrator | Tuesday 31 March 2026 05:01:11 +0000 (0:00:01.341) 0:26:43.947 *********
2026-03-31 05:01:26.179819 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 05:01:26.179831 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 05:01:26.179842 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 05:01:26.179853 | orchestrator |
2026-03-31 05:01:26.179865 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 05:01:26.179876 | orchestrator | Tuesday 31 March 2026 05:01:12 +0000 (0:00:01.341) 0:26:45.288 *********
2026-03-31 05:01:26.179887 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-31 05:01:26.179899 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-31 05:01:26.179910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-31 05:01:26.179921 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.179933 | orchestrator |
2026-03-31 05:01:26.179947 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 05:01:26.179961 | orchestrator | Tuesday 31 March 2026 05:01:12 +0000 (0:00:00.173) 0:26:45.462 *********
2026-03-31 05:01:26.179974 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-31 05:01:26.179987 | orchestrator |
2026-03-31 05:01:26.180000 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 05:01:26.180015 | orchestrator | Tuesday 31 March 2026 05:01:12 +0000 (0:00:00.218) 0:26:45.681 *********
2026-03-31 05:01:26.180029 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.180042 | orchestrator |
2026-03-31 05:01:26.180055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 05:01:26.180068 | orchestrator | Tuesday 31 March 2026 05:01:13 +0000 (0:00:00.156) 0:26:45.837 *********
2026-03-31 05:01:26.180082 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.180095 | orchestrator |
2026-03-31 05:01:26.180108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 05:01:26.180150 | orchestrator | Tuesday 31 March 2026 05:01:13 +0000 (0:00:00.137) 0:26:45.974 *********
2026-03-31 05:01:26.180163 | orchestrator | skipping: [testbed-node-4]
2026-03-31 05:01:26.180175 | orchestrator |
2026-03-31 05:01:26.180189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 05:01:26.180202 | orchestrator | Tuesday 31 March 2026 05:01:13 +0000 (0:00:00.148) 0:26:46.122 *********
2026-03-31 05:01:26.180215 | orchestrator | ok: [testbed-node-4]
2026-03-31 05:01:26.180228 | orchestrator |
2026-03-31 05:01:26.180241 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 05:01:26.180254 | orchestrator | Tuesday 31 March 2026 05:01:13 +0000 (0:00:00.231) 0:26:46.353 *********
2026-03-31 05:01:26.180267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-31 05:01:26.180281 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-31 05:01:26.180292 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-5)  2026-03-31 05:01:26.180303 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.180314 | orchestrator | 2026-03-31 05:01:26.180325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 05:01:26.180336 | orchestrator | Tuesday 31 March 2026 05:01:14 +0000 (0:00:00.432) 0:26:46.786 ********* 2026-03-31 05:01:26.180363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 05:01:26.180375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 05:01:26.180386 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 05:01:26.180397 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.180408 | orchestrator | 2026-03-31 05:01:26.180419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 05:01:26.180430 | orchestrator | Tuesday 31 March 2026 05:01:14 +0000 (0:00:00.408) 0:26:47.195 ********* 2026-03-31 05:01:26.180441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 05:01:26.180452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 05:01:26.180462 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 05:01:26.180473 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.180484 | orchestrator | 2026-03-31 05:01:26.180495 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 05:01:26.180506 | orchestrator | Tuesday 31 March 2026 05:01:14 +0000 (0:00:00.390) 0:26:47.585 ********* 2026-03-31 05:01:26.180517 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:26.180528 | orchestrator | 2026-03-31 05:01:26.180539 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 05:01:26.180550 | orchestrator | Tuesday 31 March 2026 05:01:15 +0000 
(0:00:00.152) 0:26:47.738 ********* 2026-03-31 05:01:26.180588 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-31 05:01:26.180600 | orchestrator | 2026-03-31 05:01:26.180611 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-31 05:01:26.180639 | orchestrator | Tuesday 31 March 2026 05:01:15 +0000 (0:00:00.685) 0:26:48.424 ********* 2026-03-31 05:01:26.180670 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 05:01:26.180682 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 05:01:26.180693 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 05:01:26.180704 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-31 05:01:26.180714 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-31 05:01:26.180725 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 05:01:26.180736 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 05:01:26.180747 | orchestrator | 2026-03-31 05:01:26.180758 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-31 05:01:26.180778 | orchestrator | Tuesday 31 March 2026 05:01:17 +0000 (0:00:01.399) 0:26:49.823 ********* 2026-03-31 05:01:26.180790 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 05:01:26.180801 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 05:01:26.180812 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 05:01:26.180823 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-31 05:01:26.180834 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-31 05:01:26.180845 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-31 05:01:26.180856 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-31 05:01:26.180867 | orchestrator | 2026-03-31 05:01:26.180878 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-31 05:01:26.180889 | orchestrator | Tuesday 31 March 2026 05:01:18 +0000 (0:00:01.592) 0:26:51.415 ********* 2026-03-31 05:01:26.180900 | orchestrator | changed: [testbed-node-4] 2026-03-31 05:01:26.180910 | orchestrator | 2026-03-31 05:01:26.180921 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-31 05:01:26.180932 | orchestrator | Tuesday 31 March 2026 05:01:19 +0000 (0:00:01.245) 0:26:52.661 ********* 2026-03-31 05:01:26.180943 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:01:26.180955 | orchestrator | 2026-03-31 05:01:26.180966 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-31 05:01:26.180977 | orchestrator | Tuesday 31 March 2026 05:01:21 +0000 (0:00:01.999) 0:26:54.660 ********* 2026-03-31 05:01:26.180988 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:01:26.180999 | orchestrator | 2026-03-31 05:01:26.181009 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 05:01:26.181020 | orchestrator | Tuesday 31 March 2026 05:01:23 +0000 (0:00:01.324) 0:26:55.985 ********* 2026-03-31 05:01:26.181031 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-31 05:01:26.181042 | orchestrator | 2026-03-31 05:01:26.181053 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 05:01:26.181064 | orchestrator | Tuesday 31 March 2026 05:01:23 +0000 (0:00:00.192) 0:26:56.178 ********* 2026-03-31 05:01:26.181075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-31 05:01:26.181086 | orchestrator | 2026-03-31 05:01:26.181097 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 05:01:26.181108 | orchestrator | Tuesday 31 March 2026 05:01:23 +0000 (0:00:00.208) 0:26:56.386 ********* 2026-03-31 05:01:26.181118 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.181129 | orchestrator | 2026-03-31 05:01:26.181146 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 05:01:26.181158 | orchestrator | Tuesday 31 March 2026 05:01:23 +0000 (0:00:00.131) 0:26:56.518 ********* 2026-03-31 05:01:26.181169 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:26.181180 | orchestrator | 2026-03-31 05:01:26.181191 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-31 05:01:26.181202 | orchestrator | Tuesday 31 March 2026 05:01:24 +0000 (0:00:00.504) 0:26:57.023 ********* 2026-03-31 05:01:26.181212 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:26.181223 | orchestrator | 2026-03-31 05:01:26.181234 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 05:01:26.181245 | orchestrator | Tuesday 31 March 2026 05:01:24 +0000 (0:00:00.549) 0:26:57.572 ********* 2026-03-31 05:01:26.181262 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:26.181273 | orchestrator | 2026-03-31 05:01:26.181284 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 05:01:26.181295 | orchestrator | Tuesday 31 March 2026 05:01:25 +0000 (0:00:00.822) 0:26:58.395 ********* 2026-03-31 05:01:26.181306 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.181317 | orchestrator | 2026-03-31 05:01:26.181328 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 05:01:26.181339 | orchestrator | Tuesday 31 March 2026 05:01:25 +0000 (0:00:00.142) 0:26:58.538 ********* 2026-03-31 05:01:26.181350 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.181361 | orchestrator | 2026-03-31 05:01:26.181372 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 05:01:26.181383 | orchestrator | Tuesday 31 March 2026 05:01:26 +0000 (0:00:00.162) 0:26:58.701 ********* 2026-03-31 05:01:26.181394 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:26.181405 | orchestrator | 2026-03-31 05:01:26.181416 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 05:01:26.181433 | orchestrator | Tuesday 31 March 2026 05:01:26 +0000 (0:00:00.145) 0:26:58.846 ********* 2026-03-31 05:01:37.315110 | 
orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315235 | orchestrator | 2026-03-31 05:01:37.315254 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 05:01:37.315267 | orchestrator | Tuesday 31 March 2026 05:01:26 +0000 (0:00:00.549) 0:26:59.396 ********* 2026-03-31 05:01:37.315279 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315290 | orchestrator | 2026-03-31 05:01:37.315301 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 05:01:37.315313 | orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.519) 0:26:59.915 ********* 2026-03-31 05:01:37.315324 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315336 | orchestrator | 2026-03-31 05:01:37.315347 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 05:01:37.315358 | orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.127) 0:27:00.043 ********* 2026-03-31 05:01:37.315369 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315381 | orchestrator | 2026-03-31 05:01:37.315392 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 05:01:37.315404 | orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.116) 0:27:00.160 ********* 2026-03-31 05:01:37.315415 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315426 | orchestrator | 2026-03-31 05:01:37.315437 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 05:01:37.315448 | orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.155) 0:27:00.315 ********* 2026-03-31 05:01:37.315459 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315470 | orchestrator | 2026-03-31 05:01:37.315481 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 05:01:37.315492 
| orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.170) 0:27:00.485 ********* 2026-03-31 05:01:37.315503 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315514 | orchestrator | 2026-03-31 05:01:37.315525 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 05:01:37.315536 | orchestrator | Tuesday 31 March 2026 05:01:27 +0000 (0:00:00.150) 0:27:00.636 ********* 2026-03-31 05:01:37.315548 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315559 | orchestrator | 2026-03-31 05:01:37.315570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 05:01:37.315581 | orchestrator | Tuesday 31 March 2026 05:01:28 +0000 (0:00:00.144) 0:27:00.780 ********* 2026-03-31 05:01:37.315592 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315638 | orchestrator | 2026-03-31 05:01:37.315661 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 05:01:37.315681 | orchestrator | Tuesday 31 March 2026 05:01:28 +0000 (0:00:00.137) 0:27:00.918 ********* 2026-03-31 05:01:37.315731 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315752 | orchestrator | 2026-03-31 05:01:37.315772 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 05:01:37.315785 | orchestrator | Tuesday 31 March 2026 05:01:28 +0000 (0:00:00.460) 0:27:01.379 ********* 2026-03-31 05:01:37.315798 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315811 | orchestrator | 2026-03-31 05:01:37.315823 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 05:01:37.315836 | orchestrator | Tuesday 31 March 2026 05:01:28 +0000 (0:00:00.150) 0:27:01.529 ********* 2026-03-31 05:01:37.315848 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.315861 | orchestrator | 2026-03-31 05:01:37.315873 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-31 05:01:37.315886 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.237) 0:27:01.766 ********* 2026-03-31 05:01:37.315898 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315912 | orchestrator | 2026-03-31 05:01:37.315925 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-31 05:01:37.315937 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.148) 0:27:01.915 ********* 2026-03-31 05:01:37.315950 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.315963 | orchestrator | 2026-03-31 05:01:37.315976 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-31 05:01:37.315988 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.117) 0:27:02.033 ********* 2026-03-31 05:01:37.316001 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316014 | orchestrator | 2026-03-31 05:01:37.316043 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-31 05:01:37.316055 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.158) 0:27:02.192 ********* 2026-03-31 05:01:37.316066 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316077 | orchestrator | 2026-03-31 05:01:37.316088 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-31 05:01:37.316099 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.139) 0:27:02.331 ********* 2026-03-31 05:01:37.316110 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316122 | orchestrator | 2026-03-31 05:01:37.316133 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-31 05:01:37.316144 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.172) 0:27:02.504 ********* 
2026-03-31 05:01:37.316155 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316166 | orchestrator | 2026-03-31 05:01:37.316177 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-31 05:01:37.316188 | orchestrator | Tuesday 31 March 2026 05:01:29 +0000 (0:00:00.142) 0:27:02.646 ********* 2026-03-31 05:01:37.316199 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316210 | orchestrator | 2026-03-31 05:01:37.316221 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-31 05:01:37.316233 | orchestrator | Tuesday 31 March 2026 05:01:30 +0000 (0:00:00.158) 0:27:02.805 ********* 2026-03-31 05:01:37.316244 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316255 | orchestrator | 2026-03-31 05:01:37.316266 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-31 05:01:37.316277 | orchestrator | Tuesday 31 March 2026 05:01:30 +0000 (0:00:00.126) 0:27:02.932 ********* 2026-03-31 05:01:37.316288 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316299 | orchestrator | 2026-03-31 05:01:37.316330 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-31 05:01:37.316342 | orchestrator | Tuesday 31 March 2026 05:01:30 +0000 (0:00:00.146) 0:27:03.078 ********* 2026-03-31 05:01:37.316353 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316364 | orchestrator | 2026-03-31 05:01:37.316375 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-31 05:01:37.316385 | orchestrator | Tuesday 31 March 2026 05:01:30 +0000 (0:00:00.122) 0:27:03.201 ********* 2026-03-31 05:01:37.316407 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316418 | orchestrator | 2026-03-31 05:01:37.316429 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-31 05:01:37.316440 | orchestrator | Tuesday 31 March 2026 05:01:30 +0000 (0:00:00.478) 0:27:03.680 ********* 2026-03-31 05:01:37.316451 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316462 | orchestrator | 2026-03-31 05:01:37.316473 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-31 05:01:37.316484 | orchestrator | Tuesday 31 March 2026 05:01:31 +0000 (0:00:00.243) 0:27:03.923 ********* 2026-03-31 05:01:37.316495 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.316514 | orchestrator | 2026-03-31 05:01:37.316532 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-31 05:01:37.316552 | orchestrator | Tuesday 31 March 2026 05:01:32 +0000 (0:00:00.900) 0:27:04.824 ********* 2026-03-31 05:01:37.316579 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.316597 | orchestrator | 2026-03-31 05:01:37.316643 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-31 05:01:37.316661 | orchestrator | Tuesday 31 March 2026 05:01:33 +0000 (0:00:01.249) 0:27:06.074 ********* 2026-03-31 05:01:37.316678 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-31 05:01:37.316697 | orchestrator | 2026-03-31 05:01:37.316715 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-31 05:01:37.316732 | orchestrator | Tuesday 31 March 2026 05:01:33 +0000 (0:00:00.207) 0:27:06.281 ********* 2026-03-31 05:01:37.316750 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316770 | orchestrator | 2026-03-31 05:01:37.316788 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-31 05:01:37.316803 | orchestrator | Tuesday 31 March 2026 05:01:33 +0000 (0:00:00.129) 0:27:06.411 ********* 
2026-03-31 05:01:37.316814 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316824 | orchestrator | 2026-03-31 05:01:37.316835 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-31 05:01:37.316846 | orchestrator | Tuesday 31 March 2026 05:01:33 +0000 (0:00:00.144) 0:27:06.555 ********* 2026-03-31 05:01:37.316857 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-31 05:01:37.316868 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-31 05:01:37.316879 | orchestrator | 2026-03-31 05:01:37.316890 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-31 05:01:37.316901 | orchestrator | Tuesday 31 March 2026 05:01:34 +0000 (0:00:00.806) 0:27:07.361 ********* 2026-03-31 05:01:37.316912 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.316923 | orchestrator | 2026-03-31 05:01:37.316934 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-31 05:01:37.316945 | orchestrator | Tuesday 31 March 2026 05:01:35 +0000 (0:00:00.444) 0:27:07.806 ********* 2026-03-31 05:01:37.316956 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.316966 | orchestrator | 2026-03-31 05:01:37.316977 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-31 05:01:37.316988 | orchestrator | Tuesday 31 March 2026 05:01:35 +0000 (0:00:00.151) 0:27:07.957 ********* 2026-03-31 05:01:37.316999 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.317010 | orchestrator | 2026-03-31 05:01:37.317021 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-31 05:01:37.317032 | orchestrator | Tuesday 31 March 2026 05:01:35 +0000 (0:00:00.169) 0:27:08.127 ********* 2026-03-31 05:01:37.317043 | orchestrator | 
skipping: [testbed-node-4] 2026-03-31 05:01:37.317054 | orchestrator | 2026-03-31 05:01:37.317065 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-31 05:01:37.317083 | orchestrator | Tuesday 31 March 2026 05:01:35 +0000 (0:00:00.441) 0:27:08.569 ********* 2026-03-31 05:01:37.317095 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-03-31 05:01:37.317117 | orchestrator | 2026-03-31 05:01:37.317128 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-31 05:01:37.317139 | orchestrator | Tuesday 31 March 2026 05:01:36 +0000 (0:00:00.233) 0:27:08.802 ********* 2026-03-31 05:01:37.317149 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:37.317160 | orchestrator | 2026-03-31 05:01:37.317171 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-31 05:01:37.317182 | orchestrator | Tuesday 31 March 2026 05:01:36 +0000 (0:00:00.690) 0:27:09.492 ********* 2026-03-31 05:01:37.317193 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-31 05:01:37.317204 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-31 05:01:37.317215 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-31 05:01:37.317226 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.317237 | orchestrator | 2026-03-31 05:01:37.317248 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-31 05:01:37.317259 | orchestrator | Tuesday 31 March 2026 05:01:36 +0000 (0:00:00.158) 0:27:09.651 ********* 2026-03-31 05:01:37.317269 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.317280 | orchestrator | 2026-03-31 05:01:37.317291 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-31 05:01:37.317302 | orchestrator | Tuesday 31 March 2026 05:01:37 +0000 (0:00:00.130) 0:27:09.782 ********* 2026-03-31 05:01:37.317313 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:37.317324 | orchestrator | 2026-03-31 05:01:37.317345 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-31 05:01:54.477517 | orchestrator | Tuesday 31 March 2026 05:01:37 +0000 (0:00:00.201) 0:27:09.983 ********* 2026-03-31 05:01:54.477697 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.477716 | orchestrator | 2026-03-31 05:01:54.477730 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-31 05:01:54.477742 | orchestrator | Tuesday 31 March 2026 05:01:37 +0000 (0:00:00.161) 0:27:10.145 ********* 2026-03-31 05:01:54.477753 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.477764 | orchestrator | 2026-03-31 05:01:54.477775 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-31 05:01:54.477787 | orchestrator | Tuesday 31 March 2026 05:01:37 +0000 (0:00:00.152) 0:27:10.297 ********* 2026-03-31 05:01:54.477798 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.477809 | orchestrator | 2026-03-31 05:01:54.477820 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-31 05:01:54.477831 | orchestrator | Tuesday 31 March 2026 05:01:37 +0000 (0:00:00.158) 0:27:10.455 ********* 2026-03-31 05:01:54.477843 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:54.477855 | orchestrator | 2026-03-31 05:01:54.477866 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-31 05:01:54.477877 | orchestrator | Tuesday 31 March 2026 05:01:39 +0000 (0:00:01.528) 0:27:11.984 ********* 2026-03-31 05:01:54.477888 | orchestrator | ok: 
[testbed-node-4] 2026-03-31 05:01:54.477899 | orchestrator | 2026-03-31 05:01:54.477910 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-31 05:01:54.477921 | orchestrator | Tuesday 31 March 2026 05:01:39 +0000 (0:00:00.136) 0:27:12.121 ********* 2026-03-31 05:01:54.477933 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-03-31 05:01:54.477944 | orchestrator | 2026-03-31 05:01:54.477956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-31 05:01:54.477967 | orchestrator | Tuesday 31 March 2026 05:01:39 +0000 (0:00:00.478) 0:27:12.599 ********* 2026-03-31 05:01:54.477978 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.477989 | orchestrator | 2026-03-31 05:01:54.478000 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-31 05:01:54.478092 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.170) 0:27:12.770 ********* 2026-03-31 05:01:54.478107 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478120 | orchestrator | 2026-03-31 05:01:54.478133 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-31 05:01:54.478145 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.150) 0:27:12.920 ********* 2026-03-31 05:01:54.478158 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478170 | orchestrator | 2026-03-31 05:01:54.478182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-31 05:01:54.478195 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.157) 0:27:13.078 ********* 2026-03-31 05:01:54.478208 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478220 | orchestrator | 2026-03-31 05:01:54.478233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-31 05:01:54.478246 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.140) 0:27:13.219 ********* 2026-03-31 05:01:54.478259 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478272 | orchestrator | 2026-03-31 05:01:54.478285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-31 05:01:54.478297 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.146) 0:27:13.365 ********* 2026-03-31 05:01:54.478310 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478323 | orchestrator | 2026-03-31 05:01:54.478336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-31 05:01:54.478347 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.149) 0:27:13.515 ********* 2026-03-31 05:01:54.478358 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478369 | orchestrator | 2026-03-31 05:01:54.478380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-31 05:01:54.478391 | orchestrator | Tuesday 31 March 2026 05:01:40 +0000 (0:00:00.150) 0:27:13.665 ********* 2026-03-31 05:01:54.478402 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478413 | orchestrator | 2026-03-31 05:01:54.478438 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-31 05:01:54.478449 | orchestrator | Tuesday 31 March 2026 05:01:41 +0000 (0:00:00.159) 0:27:13.825 ********* 2026-03-31 05:01:54.478460 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:01:54.478471 | orchestrator | 2026-03-31 05:01:54.478482 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-31 05:01:54.478493 | orchestrator | Tuesday 31 March 2026 05:01:41 +0000 (0:00:00.254) 0:27:14.079 ********* 2026-03-31 05:01:54.478504 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-31 05:01:54.478516 | orchestrator | 2026-03-31 05:01:54.478527 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-31 05:01:54.478538 | orchestrator | Tuesday 31 March 2026 05:01:41 +0000 (0:00:00.202) 0:27:14.282 ********* 2026-03-31 05:01:54.478549 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-31 05:01:54.478561 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-31 05:01:54.478572 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-31 05:01:54.478603 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-31 05:01:54.478616 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-31 05:01:54.478627 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-31 05:01:54.478637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-31 05:01:54.478648 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-31 05:01:54.478659 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-31 05:01:54.478699 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-31 05:01:54.478711 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-31 05:01:54.478742 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-31 05:01:54.478765 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-31 05:01:54.478777 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-31 05:01:54.478787 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-31 05:01:54.478798 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-31 05:01:54.478809 | orchestrator | 2026-03-31 05:01:54.478820 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 05:01:54.478831 | orchestrator | Tuesday 31 March 2026 05:01:47 +0000 (0:00:05.710) 0:27:19.992 ********* 2026-03-31 05:01:54.478842 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-31 05:01:54.478853 | orchestrator | 2026-03-31 05:01:54.478864 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-31 05:01:54.478875 | orchestrator | Tuesday 31 March 2026 05:01:47 +0000 (0:00:00.229) 0:27:20.222 ********* 2026-03-31 05:01:54.478886 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:01:54.478899 | orchestrator | 2026-03-31 05:01:54.478910 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-31 05:01:54.478921 | orchestrator | Tuesday 31 March 2026 05:01:48 +0000 (0:00:00.497) 0:27:20.719 ********* 2026-03-31 05:01:54.478932 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:01:54.478943 | orchestrator | 2026-03-31 05:01:54.478954 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 05:01:54.478965 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.968) 0:27:21.688 ********* 2026-03-31 05:01:54.478976 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.478987 | orchestrator | 2026-03-31 05:01:54.478998 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 05:01:54.479009 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.148) 0:27:21.837 ********* 2026-03-31 05:01:54.479020 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479031 | 
orchestrator | 2026-03-31 05:01:54.479042 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 05:01:54.479053 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.138) 0:27:21.975 ********* 2026-03-31 05:01:54.479064 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479075 | orchestrator | 2026-03-31 05:01:54.479086 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 05:01:54.479097 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.141) 0:27:22.117 ********* 2026-03-31 05:01:54.479107 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479118 | orchestrator | 2026-03-31 05:01:54.479130 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 05:01:54.479140 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.135) 0:27:22.252 ********* 2026-03-31 05:01:54.479151 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479162 | orchestrator | 2026-03-31 05:01:54.479173 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 05:01:54.479184 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.131) 0:27:22.384 ********* 2026-03-31 05:01:54.479195 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479206 | orchestrator | 2026-03-31 05:01:54.479217 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 05:01:54.479228 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.141) 0:27:22.526 ********* 2026-03-31 05:01:54.479239 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479250 | orchestrator | 2026-03-31 05:01:54.479261 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-31 05:01:54.479278 | orchestrator | Tuesday 31 March 2026 05:01:49 +0000 (0:00:00.143) 0:27:22.669 ********* 2026-03-31 05:01:54.479296 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479307 | orchestrator | 2026-03-31 05:01:54.479318 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 05:01:54.479329 | orchestrator | Tuesday 31 March 2026 05:01:50 +0000 (0:00:00.150) 0:27:22.820 ********* 2026-03-31 05:01:54.479339 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479350 | orchestrator | 2026-03-31 05:01:54.479361 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 05:01:54.479372 | orchestrator | Tuesday 31 March 2026 05:01:50 +0000 (0:00:00.503) 0:27:23.324 ********* 2026-03-31 05:01:54.479383 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479394 | orchestrator | 2026-03-31 05:01:54.479405 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 05:01:54.479416 | orchestrator | Tuesday 31 March 2026 05:01:50 +0000 (0:00:00.132) 0:27:23.456 ********* 2026-03-31 05:01:54.479427 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:01:54.479438 | orchestrator | 2026-03-31 05:01:54.479449 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 05:01:54.479460 | orchestrator | Tuesday 31 March 2026 05:01:50 +0000 (0:00:00.144) 0:27:23.600 ********* 2026-03-31 05:01:54.479470 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-31 05:01:54.479481 | orchestrator | 2026-03-31 05:01:54.479492 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 05:01:54.479503 | orchestrator | Tuesday 31 March 2026 05:01:54 +0000 (0:00:03.355) 0:27:26.956 ********* 2026-03-31 05:01:54.479514 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:01:54.479525 | orchestrator | 2026-03-31 05:01:54.479542 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 05:02:16.724767 | orchestrator | Tuesday 31 March 2026 05:01:54 +0000 (0:00:00.189) 0:27:27.145 ********* 2026-03-31 05:02:16.724915 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-31 05:02:16.724998 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-31 05:02:16.725023 | orchestrator | 2026-03-31 05:02:16.725043 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 05:02:16.725062 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:03.761) 0:27:30.907 ********* 2026-03-31 05:02:16.725081 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725103 | orchestrator | 2026-03-31 05:02:16.725122 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 05:02:16.725140 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:00.139) 0:27:31.047 ********* 2026-03-31 05:02:16.725160 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725179 | orchestrator | 2026-03-31 05:02:16.725200 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 05:02:16.725223 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:00.140) 0:27:31.188 ********* 2026-03-31 05:02:16.725243 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725262 | orchestrator | 2026-03-31 05:02:16.725281 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 05:02:16.725300 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:00.159) 0:27:31.347 ********* 2026-03-31 05:02:16.725349 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725369 | orchestrator | 2026-03-31 05:02:16.725388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 05:02:16.725407 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:00.163) 0:27:31.510 ********* 2026-03-31 05:02:16.725426 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725444 | orchestrator | 2026-03-31 05:02:16.725463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 05:02:16.725482 | orchestrator | Tuesday 31 March 2026 05:01:58 +0000 (0:00:00.157) 0:27:31.668 ********* 2026-03-31 05:02:16.725501 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:16.725521 | orchestrator | 2026-03-31 05:02:16.725540 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 05:02:16.725593 | orchestrator | Tuesday 31 March 2026 05:01:59 +0000 (0:00:00.252) 0:27:31.920 ********* 2026-03-31 05:02:16.725616 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 05:02:16.725635 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 05:02:16.725654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 05:02:16.725673 | orchestrator | skipping: 
[testbed-node-4] 2026-03-31 05:02:16.725692 | orchestrator | 2026-03-31 05:02:16.725711 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 05:02:16.725731 | orchestrator | Tuesday 31 March 2026 05:02:00 +0000 (0:00:00.792) 0:27:32.713 ********* 2026-03-31 05:02:16.725750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 05:02:16.725770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 05:02:16.725789 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 05:02:16.725829 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725851 | orchestrator | 2026-03-31 05:02:16.725870 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 05:02:16.725890 | orchestrator | Tuesday 31 March 2026 05:02:00 +0000 (0:00:00.802) 0:27:33.515 ********* 2026-03-31 05:02:16.725909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-31 05:02:16.725928 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-31 05:02:16.725947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-31 05:02:16.725966 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.725985 | orchestrator | 2026-03-31 05:02:16.726005 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 05:02:16.726098 | orchestrator | Tuesday 31 March 2026 05:02:01 +0000 (0:00:01.118) 0:27:34.633 ********* 2026-03-31 05:02:16.726120 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:16.726140 | orchestrator | 2026-03-31 05:02:16.726159 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 05:02:16.726222 | orchestrator | Tuesday 31 March 2026 05:02:02 +0000 (0:00:00.173) 0:27:34.806 ********* 2026-03-31 05:02:16.726243 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-31 05:02:16.726256 | orchestrator | 2026-03-31 05:02:16.726266 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 05:02:16.726278 | orchestrator | Tuesday 31 March 2026 05:02:02 +0000 (0:00:00.430) 0:27:35.237 ********* 2026-03-31 05:02:16.726289 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:16.726300 | orchestrator | 2026-03-31 05:02:16.726311 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-31 05:02:16.726322 | orchestrator | Tuesday 31 March 2026 05:02:03 +0000 (0:00:00.836) 0:27:36.074 ********* 2026-03-31 05:02:16.726333 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-31 05:02:16.726344 | orchestrator | 2026-03-31 05:02:16.726393 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 05:02:16.726418 | orchestrator | Tuesday 31 March 2026 05:02:03 +0000 (0:00:00.229) 0:27:36.304 ********* 2026-03-31 05:02:16.726437 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:02:16.726473 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 05:02:16.726493 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 05:02:16.726513 | orchestrator | 2026-03-31 05:02:16.726529 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 05:02:16.726548 | orchestrator | Tuesday 31 March 2026 05:02:05 +0000 (0:00:02.099) 0:27:38.404 ********* 2026-03-31 05:02:16.726594 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-31 05:02:16.726613 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-31 05:02:16.726633 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:16.726651 | orchestrator | 2026-03-31 05:02:16.726670 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-31 05:02:16.726688 | orchestrator | Tuesday 31 March 2026 05:02:06 +0000 (0:00:00.988) 0:27:39.392 ********* 2026-03-31 05:02:16.726706 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.726725 | orchestrator | 2026-03-31 05:02:16.726744 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-31 05:02:16.726761 | orchestrator | Tuesday 31 March 2026 05:02:06 +0000 (0:00:00.141) 0:27:39.534 ********* 2026-03-31 05:02:16.726777 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-31 05:02:16.726795 | orchestrator | 2026-03-31 05:02:16.726811 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-31 05:02:16.726827 | orchestrator | Tuesday 31 March 2026 05:02:07 +0000 (0:00:00.192) 0:27:39.726 ********* 2026-03-31 05:02:16.726844 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:02:16.726863 | orchestrator | 2026-03-31 05:02:16.726875 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-31 05:02:16.726885 | orchestrator | Tuesday 31 March 2026 05:02:08 +0000 (0:00:01.223) 0:27:40.950 ********* 2026-03-31 05:02:16.726895 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:02:16.726905 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-31 05:02:16.726915 | orchestrator | 2026-03-31 05:02:16.726925 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 05:02:16.726935 | orchestrator | Tuesday 31 March 2026 05:02:12 +0000 (0:00:03.938) 0:27:44.889 ********* 
2026-03-31 05:02:16.726945 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:02:16.726955 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 05:02:16.726965 | orchestrator | 2026-03-31 05:02:16.726975 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 05:02:16.726984 | orchestrator | Tuesday 31 March 2026 05:02:14 +0000 (0:00:02.044) 0:27:46.933 ********* 2026-03-31 05:02:16.726994 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-31 05:02:16.727004 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:16.727014 | orchestrator | 2026-03-31 05:02:16.727024 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-31 05:02:16.727034 | orchestrator | Tuesday 31 March 2026 05:02:15 +0000 (0:00:01.031) 0:27:47.965 ********* 2026-03-31 05:02:16.727044 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-31 05:02:16.727053 | orchestrator | 2026-03-31 05:02:16.727063 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-31 05:02:16.727073 | orchestrator | Tuesday 31 March 2026 05:02:15 +0000 (0:00:00.241) 0:27:48.206 ********* 2026-03-31 05:02:16.727092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727151 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:16.727161 | orchestrator | 2026-03-31 05:02:16.727171 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-31 05:02:16.727180 | orchestrator | Tuesday 31 March 2026 05:02:16 +0000 (0:00:00.616) 0:27:48.823 ********* 2026-03-31 05:02:16.727190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:16.727231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:58.780629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:02:58.780760 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:58.780779 | orchestrator | 2026-03-31 05:02:58.780793 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-31 05:02:58.780807 | orchestrator | Tuesday 31 March 2026 05:02:16 +0000 (0:00:00.567) 0:27:49.391 ********* 2026-03-31 05:02:58.780819 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:02:58.780832 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:02:58.780843 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:02:58.780855 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:02:58.780867 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:02:58.780879 | orchestrator | 2026-03-31 05:02:58.780890 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-31 05:02:58.780902 | orchestrator | Tuesday 31 March 2026 05:02:45 +0000 (0:00:29.035) 0:28:18.426 ********* 2026-03-31 05:02:58.780913 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:58.780925 | orchestrator | 2026-03-31 05:02:58.780936 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-31 05:02:58.780947 | orchestrator | Tuesday 31 March 2026 05:02:45 +0000 (0:00:00.127) 0:28:18.553 ********* 2026-03-31 05:02:58.780959 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:02:58.780970 | orchestrator | 2026-03-31 05:02:58.780981 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-31 05:02:58.781020 | orchestrator | Tuesday 31 March 2026 05:02:45 +0000 (0:00:00.118) 0:28:18.672 ********* 2026-03-31 05:02:58.781032 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-31 05:02:58.781044 | orchestrator | 2026-03-31 05:02:58.781056 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-31 05:02:58.781093 | orchestrator | Tuesday 31 March 2026 05:02:46 +0000 (0:00:00.220) 0:28:18.892 ********* 2026-03-31 05:02:58.781107 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-31 05:02:58.781121 | orchestrator | 2026-03-31 05:02:58.781134 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-31 05:02:58.781148 | orchestrator | Tuesday 31 March 2026 05:02:46 +0000 (0:00:00.524) 0:28:19.417 ********* 2026-03-31 05:02:58.781161 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:58.781174 | orchestrator | 2026-03-31 05:02:58.781188 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-31 05:02:58.781207 | orchestrator | Tuesday 31 March 2026 05:02:47 +0000 (0:00:01.046) 0:28:20.463 ********* 2026-03-31 05:02:58.781227 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:58.781240 | orchestrator | 2026-03-31 05:02:58.781252 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-31 05:02:58.781265 | orchestrator | Tuesday 31 March 2026 05:02:48 +0000 (0:00:00.928) 0:28:21.392 ********* 2026-03-31 05:02:58.781278 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:02:58.781290 | orchestrator | 2026-03-31 05:02:58.781319 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-31 05:02:58.781333 | orchestrator | Tuesday 31 March 2026 05:02:49 +0000 (0:00:01.246) 0:28:22.638 ********* 2026-03-31 05:02:58.781346 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-31 05:02:58.781359 | orchestrator | 2026-03-31 05:02:58.781371 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-31 05:02:58.781385 | 
orchestrator | 2026-03-31 05:02:58.781397 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 05:02:58.781409 | orchestrator | Tuesday 31 March 2026 05:02:52 +0000 (0:00:02.098) 0:28:24.737 ********* 2026-03-31 05:02:58.781420 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-31 05:02:58.781431 | orchestrator | 2026-03-31 05:02:58.781442 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-31 05:02:58.781453 | orchestrator | Tuesday 31 March 2026 05:02:52 +0000 (0:00:00.248) 0:28:24.985 ********* 2026-03-31 05:02:58.781464 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781476 | orchestrator | 2026-03-31 05:02:58.781487 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-31 05:02:58.781498 | orchestrator | Tuesday 31 March 2026 05:02:52 +0000 (0:00:00.448) 0:28:25.434 ********* 2026-03-31 05:02:58.781509 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781545 | orchestrator | 2026-03-31 05:02:58.781557 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 05:02:58.781568 | orchestrator | Tuesday 31 March 2026 05:02:52 +0000 (0:00:00.124) 0:28:25.559 ********* 2026-03-31 05:02:58.781579 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781590 | orchestrator | 2026-03-31 05:02:58.781602 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 05:02:58.781613 | orchestrator | Tuesday 31 March 2026 05:02:53 +0000 (0:00:00.447) 0:28:26.006 ********* 2026-03-31 05:02:58.781624 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781635 | orchestrator | 2026-03-31 05:02:58.781667 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-31 05:02:58.781679 | orchestrator | Tuesday 31 
March 2026 05:02:53 +0000 (0:00:00.430) 0:28:26.437 ********* 2026-03-31 05:02:58.781690 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781701 | orchestrator | 2026-03-31 05:02:58.781712 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-31 05:02:58.781723 | orchestrator | Tuesday 31 March 2026 05:02:53 +0000 (0:00:00.154) 0:28:26.591 ********* 2026-03-31 05:02:58.781734 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781745 | orchestrator | 2026-03-31 05:02:58.781757 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-31 05:02:58.781778 | orchestrator | Tuesday 31 March 2026 05:02:54 +0000 (0:00:00.152) 0:28:26.743 ********* 2026-03-31 05:02:58.781790 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:02:58.781801 | orchestrator | 2026-03-31 05:02:58.781811 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-31 05:02:58.781823 | orchestrator | Tuesday 31 March 2026 05:02:54 +0000 (0:00:00.169) 0:28:26.912 ********* 2026-03-31 05:02:58.781834 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781845 | orchestrator | 2026-03-31 05:02:58.781856 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-31 05:02:58.781867 | orchestrator | Tuesday 31 March 2026 05:02:54 +0000 (0:00:00.139) 0:28:27.052 ********* 2026-03-31 05:02:58.781878 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 05:02:58.781889 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 05:02:58.781899 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 05:02:58.781911 | orchestrator | 2026-03-31 05:02:58.781922 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-31 05:02:58.781933 | orchestrator | Tuesday 31 March 2026 05:02:55 +0000 (0:00:00.667) 0:28:27.719 ********* 2026-03-31 05:02:58.781944 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:02:58.781955 | orchestrator | 2026-03-31 05:02:58.781966 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-31 05:02:58.781977 | orchestrator | Tuesday 31 March 2026 05:02:55 +0000 (0:00:00.290) 0:28:28.010 ********* 2026-03-31 05:02:58.781988 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-31 05:02:58.781998 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-31 05:02:58.782010 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-31 05:02:58.782077 | orchestrator | 2026-03-31 05:02:58.782089 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-31 05:02:58.782100 | orchestrator | Tuesday 31 March 2026 05:02:57 +0000 (0:00:01.851) 0:28:29.861 ********* 2026-03-31 05:02:58.782111 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-31 05:02:58.782122 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-31 05:02:58.782133 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-31 05:02:58.782144 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:02:58.782155 | orchestrator | 2026-03-31 05:02:58.782166 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-31 05:02:58.782213 | orchestrator | Tuesday 31 March 2026 05:02:57 +0000 (0:00:00.426) 0:28:30.288 ********* 2026-03-31 05:02:58.782226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-31 05:02:58.782247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-31 05:02:58.782259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-31 05:02:58.782271 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:02:58.782282 | orchestrator | 2026-03-31 05:02:58.782293 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-31 05:02:58.782304 | orchestrator | Tuesday 31 March 2026 05:02:58 +0000 (0:00:00.980) 0:28:31.268 ********* 2026-03-31 05:02:58.782318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 05:02:58.782349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.166953 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.167057 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167075 | orchestrator | 2026-03-31 05:03:03.167088 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-31 05:03:03.167101 | orchestrator | Tuesday 31 March 2026 05:02:58 +0000 (0:00:00.182) 0:28:31.450 ********* 2026-03-31 05:03:03.167115 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a470704af4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-31 05:02:55.853242', 'end': '2026-03-31 05:02:55.899342', 'delta': '0:00:00.046100', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a470704af4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-31 05:03:03.167129 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '72281537ffe8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-31 05:02:56.418815', 'end': '2026-03-31 05:02:56.466886', 'delta': '0:00:00.048071', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72281537ffe8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-31 05:03:03.167160 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '4f3969f3506a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-31 05:02:56.969649', 'end': '2026-03-31 05:02:57.031394', 'delta': '0:00:00.061745', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f3969f3506a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-31 05:03:03.167173 | orchestrator | 2026-03-31 05:03:03.167184 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-31 05:03:03.167219 | orchestrator | Tuesday 31 March 2026 05:02:58 +0000 (0:00:00.191) 0:28:31.642 ********* 2026-03-31 05:03:03.167231 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167244 | orchestrator | 2026-03-31 05:03:03.167255 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-31 05:03:03.167266 | orchestrator | Tuesday 31 March 2026 05:02:59 +0000 (0:00:00.280) 0:28:31.922 ********* 2026-03-31 05:03:03.167277 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167288 | orchestrator | 2026-03-31 05:03:03.167299 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-31 05:03:03.167310 | orchestrator | Tuesday 31 March 2026 05:03:00 +0000 (0:00:01.047) 0:28:32.969 ********* 2026-03-31 05:03:03.167321 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167332 | orchestrator | 2026-03-31 05:03:03.167344 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-31 05:03:03.167355 | orchestrator | Tuesday 31 March 2026 05:03:00 +0000 (0:00:00.175) 0:28:33.144 ********* 2026-03-31 05:03:03.167366 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-31 05:03:03.167377 | orchestrator | 2026-03-31 05:03:03.167388 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 05:03:03.167400 | orchestrator | Tuesday 31 March 2026 05:03:01 +0000 (0:00:00.919) 0:28:34.064 ********* 2026-03-31 05:03:03.167410 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167421 | orchestrator | 2026-03-31 05:03:03.167432 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-31 05:03:03.167444 | orchestrator | Tuesday 31 March 2026 05:03:01 +0000 (0:00:00.153) 0:28:34.217 ********* 2026-03-31 05:03:03.167473 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167487 | orchestrator | 2026-03-31 05:03:03.167500 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-31 05:03:03.167548 | orchestrator | Tuesday 31 March 2026 05:03:01 +0000 (0:00:00.139) 0:28:34.357 ********* 2026-03-31 05:03:03.167569 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167589 | orchestrator | 2026-03-31 05:03:03.167609 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-31 05:03:03.167624 | orchestrator | Tuesday 31 March 2026 05:03:01 +0000 (0:00:00.230) 0:28:34.587 ********* 2026-03-31 05:03:03.167637 | orchestrator | 
skipping: [testbed-node-5] 2026-03-31 05:03:03.167650 | orchestrator | 2026-03-31 05:03:03.167664 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-31 05:03:03.167676 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.124) 0:28:34.712 ********* 2026-03-31 05:03:03.167690 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167702 | orchestrator | 2026-03-31 05:03:03.167716 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-31 05:03:03.167728 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.127) 0:28:34.839 ********* 2026-03-31 05:03:03.167741 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167754 | orchestrator | 2026-03-31 05:03:03.167768 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-31 05:03:03.167780 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.164) 0:28:35.004 ********* 2026-03-31 05:03:03.167794 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167807 | orchestrator | 2026-03-31 05:03:03.167819 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-31 05:03:03.167830 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.141) 0:28:35.145 ********* 2026-03-31 05:03:03.167841 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167852 | orchestrator | 2026-03-31 05:03:03.167864 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-31 05:03:03.167875 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.165) 0:28:35.310 ********* 2026-03-31 05:03:03.167887 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.167898 | orchestrator | 2026-03-31 05:03:03.167909 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-31 05:03:03.167933 
| orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.138) 0:28:35.448 ********* 2026-03-31 05:03:03.167945 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:03:03.167956 | orchestrator | 2026-03-31 05:03:03.167967 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-31 05:03:03.167978 | orchestrator | Tuesday 31 March 2026 05:03:02 +0000 (0:00:00.164) 0:28:35.613 ********* 2026-03-31 05:03:03.167990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.168010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}})  2026-03-31 05:03:03.168023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-31 05:03:03.168045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}})  2026-03-31 05:03:03.606720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.606827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.606870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-31 05:03:03.606886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.606949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 05:03:03.606964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.606977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}})  2026-03-31 05:03:03.607012 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}})  2026-03-31 05:03:03.607026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.607055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-31 05:03:03.607072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.607087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-31 05:03:03.607109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-31 05:03:03.826372 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:03.826484 | orchestrator | 2026-03-31 05:03:03.826501 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-31 05:03:03.826542 | orchestrator | Tuesday 31 March 2026 05:03:03 +0000 (0:00:00.665) 0:28:36.278 ********* 2026-03-31 05:03:03.826557 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9', 'dm-uuid-LVM-x16wR0JSkJwOUat6KB2RjtOnd6k2ruBp3Senp6or7C3BHvrbv8KuFHdSdmwvdICC'], 'uuids': ['4a48fb33-b599-4c4d-a815-d018d343a3ff'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d', 'scsi-SQEMU_QEMU_HARDDISK_d1382055-b12a-4a0d-90b0-6b0bf5b2002d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd1382055', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bwm83I-k31i-pwme-XT9I-9Z0g-1hP0-CwgXOd', 'scsi-0QEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae', 'scsi-SQEMU_QEMU_HARDDISK_cee620fc-9fd6-4c5e-b237-9b955e0088ae'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-31-01-38-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1', 'dm-uuid-CRYPT-LUKS2-74b5eafc2cf149539043240c66b113f2-yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:03.827666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--07ced279--a583--5107--8220--95f80fc10ac7-osd--block--07ced279--a583--5107--8220--95f80fc10ac7', 'dm-uuid-LVM-4Lb9QdMZv1ai74sfHiNB7SWQCThlMxSwyKTWsVenR44CqY2klBeRO2fR5AXJ6GI1'], 'uuids': ['74b5eafc-2cf1-4953-9043-240c66b113f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'cee620fc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yKTWsV-enR4-4CqY-2klB-eRO2-fR5A-XJ6GI1']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:06.962122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zgTsa4-r5F1-H4rU-9oqC-nOys-qaba-d4ei1Y', 'scsi-0QEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7', 'scsi-SQEMU_QEMU_HARDDISK_0036be6c-41d0-4a1c-804a-c8bed222bda7'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0036be6c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--185c377e--da3e--5428--98db--747be321d2f9-osd--block--185c377e--da3e--5428--98db--747be321d2f9']}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:06.962222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-31 05:03:06.962257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f91d726b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1', 'scsi-SQEMU_QEMU_HARDDISK_f91d726b-9268-46b5-b001-d0963ab9d126-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 05:03:06.962308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 05:03:06.962320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 05:03:06.962330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC', 'dm-uuid-CRYPT-LUKS2-4a48fb33b5994c4da815d018d343a3ff-3Senp6-or7C-3BHv-rbv8-KuFH-dSdm-wvdICC'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-31 05:03:06.962341 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:06.962353 | orchestrator |
2026-03-31 05:03:06.962363 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-31 05:03:06.962378 | orchestrator | Tuesday 31 March 2026 05:03:03 +0000 (0:00:00.395) 0:28:36.674 *********
2026-03-31 05:03:06.962388 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:06.962398 | orchestrator |
2026-03-31 05:03:06.962407 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-31 05:03:06.962416 | orchestrator | Tuesday 31 March 2026 05:03:04 +0000 (0:00:00.492) 0:28:37.166 *********
2026-03-31 05:03:06.962425 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:06.962433 | orchestrator |
2026-03-31 05:03:06.962442 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 05:03:06.962451 | orchestrator | Tuesday 31 March 2026 05:03:04 +0000 (0:00:00.131) 0:28:37.298 *********
2026-03-31 05:03:06.962460 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:06.962469 | orchestrator |
2026-03-31 05:03:06.962478 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 05:03:06.962487 | orchestrator | Tuesday 31 March 2026 05:03:05 +0000 (0:00:00.441) 0:28:37.739 *********
2026-03-31 05:03:06.962496 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:06.962553 | orchestrator |
2026-03-31 05:03:06.962563 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-31 05:03:06.962572 | orchestrator | Tuesday 31 March 2026 05:03:05 +0000 (0:00:00.122) 0:28:37.862 *********
2026-03-31 05:03:06.962581 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:06.962590 | orchestrator |
2026-03-31 05:03:06.962599 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-31 05:03:06.962607 | orchestrator | Tuesday 31 March 2026 05:03:05 +0000 (0:00:00.235) 0:28:38.098 *********
2026-03-31 05:03:06.962624 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:06.962635 | orchestrator |
2026-03-31 05:03:06.962646 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-31 05:03:06.962656 | orchestrator | Tuesday 31 March 2026 05:03:05 +0000 (0:00:00.173) 0:28:38.271 *********
2026-03-31 05:03:06.962666 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-31 05:03:06.962677 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-31 05:03:06.962687 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-31 05:03:06.962697 | orchestrator |
2026-03-31 05:03:06.962707 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-31 05:03:06.962718 | orchestrator | Tuesday 31 March 2026 05:03:06 +0000 (0:00:00.964) 0:28:39.235 *********
2026-03-31 05:03:06.962728 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-31 05:03:06.962740 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-31 05:03:06.962751 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-31 05:03:06.962760 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:06.962768 | orchestrator |
2026-03-31 05:03:06.962777 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-31 05:03:06.962786 | orchestrator | Tuesday 31 March 2026 05:03:06 +0000 (0:00:00.158) 0:28:39.394 *********
2026-03-31 05:03:06.962795 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-03-31 05:03:06.962805 | orchestrator |
2026-03-31 05:03:06.962820 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-31 05:03:21.734606 | orchestrator | Tuesday 31 March 2026 05:03:06 +0000 (0:00:00.240) 0:28:39.634 *********
2026-03-31 05:03:21.734730 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.734748 | orchestrator |
2026-03-31 05:03:21.734762 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-31 05:03:21.734774 | orchestrator | Tuesday 31 March 2026 05:03:07 +0000 (0:00:00.154) 0:28:39.789 *********
2026-03-31 05:03:21.734785 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.734796 | orchestrator |
2026-03-31 05:03:21.734808 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-31 05:03:21.734819 | orchestrator | Tuesday 31 March 2026 05:03:07 +0000 (0:00:00.473) 0:28:40.262 *********
2026-03-31 05:03:21.734830 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.734841 | orchestrator |
2026-03-31 05:03:21.734852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-31 05:03:21.734863 | orchestrator | Tuesday 31 March 2026 05:03:07 +0000 (0:00:00.151) 0:28:40.414 *********
2026-03-31 05:03:21.734875 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.734887 | orchestrator |
2026-03-31 05:03:21.734898 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-31 05:03:21.734909 | orchestrator | Tuesday 31 March 2026 05:03:07 +0000 (0:00:00.249) 0:28:40.664 *********
2026-03-31 05:03:21.734920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 05:03:21.734931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 05:03:21.734942 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 05:03:21.734953 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.734964 | orchestrator |
2026-03-31 05:03:21.734976 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-31 05:03:21.734987 | orchestrator | Tuesday 31 March 2026 05:03:08 +0000 (0:00:00.395) 0:28:41.060 *********
2026-03-31 05:03:21.734998 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 05:03:21.735009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 05:03:21.735020 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 05:03:21.735031 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.735062 | orchestrator |
2026-03-31 05:03:21.735074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-31 05:03:21.735085 | orchestrator | Tuesday 31 March 2026 05:03:08 +0000 (0:00:00.414) 0:28:41.474 *********
2026-03-31 05:03:21.735096 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-31 05:03:21.735107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-31 05:03:21.735118 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 05:03:21.735128 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.735139 | orchestrator |
2026-03-31 05:03:21.735165 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-31 05:03:21.735176 | orchestrator | Tuesday 31 March 2026 05:03:09 +0000 (0:00:00.394) 0:28:41.869 *********
2026-03-31 05:03:21.735187 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.735198 | orchestrator |
2026-03-31 05:03:21.735209 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-31 05:03:21.735220 | orchestrator | Tuesday 31 March 2026 05:03:09 +0000 (0:00:00.154) 0:28:42.023 *********
2026-03-31 05:03:21.735231 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-31 05:03:21.735241 | orchestrator |
2026-03-31 05:03:21.735252 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-31 05:03:21.735264 | orchestrator | Tuesday 31 March 2026 05:03:09 +0000 (0:00:00.340) 0:28:42.364 *********
2026-03-31 05:03:21.735275 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 05:03:21.735287 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 05:03:21.735297 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 05:03:21.735308 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 05:03:21.735319 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 05:03:21.735330 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 05:03:21.735341 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 05:03:21.735352 | orchestrator |
2026-03-31 05:03:21.735363 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-31 05:03:21.735373 | orchestrator | Tuesday 31 March 2026 05:03:10 +0000 (0:00:01.136) 0:28:43.501 *********
2026-03-31 05:03:21.735384 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-31 05:03:21.735395 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-31 05:03:21.735406 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-31 05:03:21.735417 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-31 05:03:21.735428 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-31 05:03:21.735439 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-31 05:03:21.735450 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-31 05:03:21.735461 | orchestrator |
2026-03-31 05:03:21.735472 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-31 05:03:21.735482 | orchestrator | Tuesday 31 March 2026 05:03:12 +0000 (0:00:01.675) 0:28:45.177 *********
2026-03-31 05:03:21.735522 | orchestrator | changed: [testbed-node-5]
2026-03-31 05:03:21.735534 | orchestrator |
2026-03-31 05:03:21.735564 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-31 05:03:21.735576 | orchestrator | Tuesday 31 March 2026 05:03:13 +0000 (0:00:01.201) 0:28:46.378 *********
2026-03-31 05:03:21.735588 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 05:03:21.735600 | orchestrator |
2026-03-31 05:03:21.735622 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-31 05:03:21.735633 | orchestrator | Tuesday 31 March 2026 05:03:15 +0000 (0:00:01.847) 0:28:48.226 *********
2026-03-31 05:03:21.735644 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-31 05:03:21.735656 | orchestrator |
2026-03-31 05:03:21.735667 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-31 05:03:21.735679 | orchestrator | Tuesday 31 March 2026 05:03:17 +0000 (0:00:01.534) 0:28:49.760 *********
2026-03-31 05:03:21.735689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-31 05:03:21.735700 | orchestrator |
2026-03-31 05:03:21.735711 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-31 05:03:21.735722 | orchestrator | Tuesday 31 March 2026 05:03:17 +0000 (0:00:00.195) 0:28:49.955 *********
2026-03-31 05:03:21.735733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-31 05:03:21.735744 | orchestrator |
2026-03-31 05:03:21.735755 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-31 05:03:21.735766 | orchestrator | Tuesday 31 March 2026 05:03:17 +0000 (0:00:00.198) 0:28:50.154 *********
2026-03-31 05:03:21.735776 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.735787 | orchestrator |
2026-03-31 05:03:21.735798 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-31 05:03:21.735809 | orchestrator | Tuesday 31 March 2026 05:03:17 +0000 (0:00:00.136) 0:28:50.290 *********
2026-03-31 05:03:21.735820 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.735831 | orchestrator |
2026-03-31 05:03:21.735841 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-31 05:03:21.735852 | orchestrator | Tuesday 31 March 2026 05:03:18 +0000 (0:00:00.527) 0:28:50.818 *********
2026-03-31 05:03:21.735863 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.735874 | orchestrator |
2026-03-31 05:03:21.735885 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-31 05:03:21.735896 | orchestrator | Tuesday 31 March 2026 05:03:18 +0000 (0:00:00.502) 0:28:51.320 *********
2026-03-31 05:03:21.735907 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.735918 | orchestrator |
2026-03-31 05:03:21.735932 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-31 05:03:21.735958 | orchestrator | Tuesday 31 March 2026 05:03:19 +0000 (0:00:00.509) 0:28:51.830 *********
2026-03-31 05:03:21.735977 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.735995 | orchestrator |
2026-03-31 05:03:21.736007 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-31 05:03:21.736018 | orchestrator | Tuesday 31 March 2026 05:03:19 +0000 (0:00:00.125) 0:28:51.955 *********
2026-03-31 05:03:21.736029 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.736040 | orchestrator |
2026-03-31 05:03:21.736051 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-31 05:03:21.736062 | orchestrator | Tuesday 31 March 2026 05:03:19 +0000 (0:00:00.130) 0:28:52.086 *********
2026-03-31 05:03:21.736073 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.736084 | orchestrator |
2026-03-31 05:03:21.736095 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-31 05:03:21.736106 | orchestrator | Tuesday 31 March 2026 05:03:19 +0000 (0:00:00.138) 0:28:52.224 *********
2026-03-31 05:03:21.736117 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.736128 | orchestrator |
2026-03-31 05:03:21.736139 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-31 05:03:21.736150 | orchestrator | Tuesday 31 March 2026 05:03:20 +0000 (0:00:00.520) 0:28:52.745 *********
2026-03-31 05:03:21.736161 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.736172 | orchestrator |
2026-03-31 05:03:21.736183 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-31 05:03:21.736202 | orchestrator | Tuesday 31 March 2026 05:03:20 +0000 (0:00:00.798) 0:28:53.543 *********
2026-03-31 05:03:21.736213 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.736224 | orchestrator |
2026-03-31 05:03:21.736235 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-31 05:03:21.736246 | orchestrator | Tuesday 31 March 2026 05:03:20 +0000 (0:00:00.113) 0:28:53.656 *********
2026-03-31 05:03:21.736257 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.736268 | orchestrator |
2026-03-31 05:03:21.736280 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-31 05:03:21.736291 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.135) 0:28:53.792 *********
2026-03-31 05:03:21.736302 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.736313 | orchestrator |
2026-03-31 05:03:21.736324 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-31 05:03:21.736335 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.150) 0:28:53.942 *********
2026-03-31 05:03:21.736346 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.736357 | orchestrator |
2026-03-31 05:03:21.736368 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-31 05:03:21.736379 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.169) 0:28:54.111 *********
2026-03-31 05:03:21.736390 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:21.736401 | orchestrator |
2026-03-31 05:03:21.736412 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-31 05:03:21.736423 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.154) 0:28:54.266 *********
2026-03-31 05:03:21.736434 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:21.736445 | orchestrator |
2026-03-31 05:03:21.736463 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-31 05:03:33.209161 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.134) 0:28:54.401 *********
2026-03-31 05:03:33.209292 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209309 | orchestrator |
2026-03-31 05:03:33.209321 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-31 05:03:33.209332 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.132) 0:28:54.533 *********
2026-03-31 05:03:33.209343 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209353 | orchestrator |
2026-03-31 05:03:33.209363 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-31 05:03:33.209374 | orchestrator | Tuesday 31 March 2026 05:03:21 +0000 (0:00:00.134) 0:28:54.667 *********
2026-03-31 05:03:33.209384 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.209395 | orchestrator |
2026-03-31 05:03:33.209405 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-31 05:03:33.209415 | orchestrator | Tuesday 31 March 2026 05:03:22 +0000 (0:00:00.176) 0:28:54.844 *********
2026-03-31 05:03:33.209425 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.209435 | orchestrator |
2026-03-31 05:03:33.209445 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-31 05:03:33.209455 | orchestrator | Tuesday 31 March 2026 05:03:22 +0000 (0:00:00.222) 0:28:55.067 *********
2026-03-31 05:03:33.209466 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209475 | orchestrator |
2026-03-31 05:03:33.209562 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-31 05:03:33.209573 | orchestrator | Tuesday 31 March 2026 05:03:22 +0000 (0:00:00.129) 0:28:55.197 *********
2026-03-31 05:03:33.209583 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209593 | orchestrator |
2026-03-31 05:03:33.209603 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-31 05:03:33.209613 | orchestrator | Tuesday 31 March 2026 05:03:22 +0000 (0:00:00.128) 0:28:55.325 *********
2026-03-31 05:03:33.209623 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209637 | orchestrator |
2026-03-31 05:03:33.209655 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-31 05:03:33.209704 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.457) 0:28:55.783 *********
2026-03-31 05:03:33.209723 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209740 | orchestrator |
2026-03-31 05:03:33.209756 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-31 05:03:33.209773 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.124) 0:28:55.907 *********
2026-03-31 05:03:33.209790 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209808 | orchestrator |
2026-03-31 05:03:33.209824 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-31 05:03:33.209841 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.120) 0:28:56.028 *********
2026-03-31 05:03:33.209858 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209874 | orchestrator |
2026-03-31 05:03:33.209910 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-31 05:03:33.209929 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.131) 0:28:56.159 *********
2026-03-31 05:03:33.209945 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.209964 | orchestrator |
2026-03-31 05:03:33.209982 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-31 05:03:33.210001 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.126) 0:28:56.286 *********
2026-03-31 05:03:33.210095 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210116 | orchestrator |
2026-03-31 05:03:33.210129 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-31 05:03:33.210182 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.122) 0:28:56.408 *********
2026-03-31 05:03:33.210197 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210212 | orchestrator |
2026-03-31 05:03:33.210226 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-31 05:03:33.210239 | orchestrator | Tuesday 31 March 2026 05:03:23 +0000 (0:00:00.141) 0:28:56.550 *********
2026-03-31 05:03:33.210252 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210265 | orchestrator |
2026-03-31 05:03:33.210277 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-31 05:03:33.210291 | orchestrator | Tuesday 31 March 2026 05:03:24 +0000 (0:00:00.133) 0:28:56.684 *********
2026-03-31 05:03:33.210305 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210319 | orchestrator |
2026-03-31 05:03:33.210332 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-31 05:03:33.210346 | orchestrator | Tuesday 31 March 2026 05:03:24 +0000 (0:00:00.136) 0:28:56.821 *********
2026-03-31 05:03:33.210359 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210374 | orchestrator |
2026-03-31 05:03:33.210388 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-31 05:03:33.210403 | orchestrator | Tuesday 31 March 2026 05:03:24 +0000 (0:00:00.198) 0:28:57.020 *********
2026-03-31 05:03:33.210418 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.210433 | orchestrator |
2026-03-31 05:03:33.210449 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-31 05:03:33.210465 | orchestrator | Tuesday 31 March 2026 05:03:25 +0000 (0:00:00.964) 0:28:57.985 *********
2026-03-31 05:03:33.210509 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.210526 | orchestrator |
2026-03-31 05:03:33.210541 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-31 05:03:33.210555 | orchestrator | Tuesday 31 March 2026 05:03:26 +0000 (0:00:01.146) 0:28:59.131 *********
2026-03-31 05:03:33.210570 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-31 05:03:33.210587 | orchestrator |
2026-03-31 05:03:33.210602 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-31 05:03:33.210615 | orchestrator | Tuesday 31 March 2026 05:03:26 +0000 (0:00:00.513) 0:28:59.645 *********
2026-03-31 05:03:33.210630 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210645 | orchestrator |
2026-03-31 05:03:33.210660 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-31 05:03:33.210722 | orchestrator | Tuesday 31 March 2026 05:03:27 +0000 (0:00:00.147) 0:28:59.793 *********
2026-03-31 05:03:33.210740 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210755 | orchestrator |
2026-03-31 05:03:33.210770 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-31 05:03:33.210785 | orchestrator | Tuesday 31 March 2026 05:03:27 +0000 (0:00:00.165) 0:28:59.958 *********
2026-03-31 05:03:33.210800 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-31 05:03:33.210815 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-31 05:03:33.210830 | orchestrator |
2026-03-31 05:03:33.210845 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-31 05:03:33.210860 | orchestrator | Tuesday 31 March 2026 05:03:28 +0000 (0:00:00.806) 0:29:00.765 *********
2026-03-31 05:03:33.210875 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.210890 | orchestrator |
2026-03-31 05:03:33.210905 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-31 05:03:33.210919 | orchestrator | Tuesday 31 March 2026 05:03:28 +0000 (0:00:00.459) 0:29:01.224 *********
2026-03-31 05:03:33.210934 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.210949 | orchestrator |
2026-03-31 05:03:33.210963 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-31 05:03:33.210977 | orchestrator | Tuesday 31 March 2026 05:03:28 +0000 (0:00:00.162) 0:29:01.387 *********
2026-03-31 05:03:33.210992 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211007 | orchestrator |
2026-03-31 05:03:33.211022 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-31 05:03:33.211036 | orchestrator | Tuesday 31 March 2026 05:03:28 +0000 (0:00:00.151) 0:29:01.539 *********
2026-03-31 05:03:33.211051 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211070 | orchestrator |
2026-03-31 05:03:33.211084 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-31 05:03:33.211099 | orchestrator | Tuesday 31 March 2026 05:03:28 +0000 (0:00:00.124) 0:29:01.664 *********
2026-03-31 05:03:33.211114 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-31 05:03:33.211129 | orchestrator |
2026-03-31 05:03:33.211144 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-31 05:03:33.211158 | orchestrator | Tuesday 31 March 2026 05:03:29 +0000 (0:00:00.247) 0:29:01.911 *********
2026-03-31 05:03:33.211173 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.211188 | orchestrator |
2026-03-31 05:03:33.211203 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-31 05:03:33.211217 | orchestrator | Tuesday 31 March 2026 05:03:29 +0000 (0:00:00.665) 0:29:02.577 *********
2026-03-31 05:03:33.211242 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-31 05:03:33.211259 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-31 05:03:33.211274 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-31 05:03:33.211288 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211303 | orchestrator |
2026-03-31 05:03:33.211318 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-31 05:03:33.211332 | orchestrator | Tuesday 31 March 2026 05:03:30 +0000 (0:00:00.176) 0:29:02.754 *********
2026-03-31 05:03:33.211347 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211360 | orchestrator |
2026-03-31 05:03:33.211374 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-31 05:03:33.211386 | orchestrator | Tuesday 31 March 2026 05:03:30 +0000 (0:00:00.492) 0:29:03.247 *********
2026-03-31 05:03:33.211399 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211412 | orchestrator |
2026-03-31 05:03:33.211425 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-31 05:03:33.211451 | orchestrator | Tuesday 31 March 2026 05:03:30 +0000 (0:00:00.175) 0:29:03.422 *********
2026-03-31 05:03:33.211464 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211478 | orchestrator |
2026-03-31 05:03:33.211516 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-31 05:03:33.211524 | orchestrator | Tuesday 31 March 2026 05:03:30 +0000 (0:00:00.169) 0:29:03.592 *********
2026-03-31 05:03:33.211532 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211540 | orchestrator |
2026-03-31 05:03:33.211548 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-31 05:03:33.211556 | orchestrator | Tuesday 31 March 2026 05:03:31 +0000 (0:00:00.142) 0:29:03.735 *********
2026-03-31 05:03:33.211564 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211572 | orchestrator |
2026-03-31 05:03:33.211580 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-31 05:03:33.211588 | orchestrator | Tuesday 31 March 2026 05:03:31 +0000 (0:00:00.166) 0:29:03.901 *********
2026-03-31 05:03:33.211596 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.211603 | orchestrator |
2026-03-31 05:03:33.211611 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-31 05:03:33.211619 | orchestrator | Tuesday 31 March 2026 05:03:32 +0000 (0:00:01.458) 0:29:05.359 *********
2026-03-31 05:03:33.211627 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:33.211635 | orchestrator |
2026-03-31 05:03:33.211643 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-31 05:03:33.211651 | orchestrator | Tuesday 31 March 2026 05:03:32 +0000 (0:00:00.156) 0:29:05.516 *********
2026-03-31 05:03:33.211659 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-31 05:03:33.211667 | orchestrator |
2026-03-31 05:03:33.211675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-31 05:03:33.211683 | orchestrator | Tuesday 31 March 2026 05:03:33 +0000 (0:00:00.216) 0:29:05.732 *********
2026-03-31 05:03:33.211691 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:33.211699 | orchestrator |
2026-03-31 05:03:33.211707 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-31 05:03:33.211725 | orchestrator | Tuesday 31 March 2026 05:03:33 +0000 (0:00:00.147) 0:29:05.880 *********
2026-03-31 05:03:52.185081 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185189 | orchestrator |
2026-03-31 05:03:52.185204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-31 05:03:52.185215 | orchestrator | Tuesday 31 March 2026 05:03:33 +0000 (0:00:00.145) 0:29:06.026 *********
2026-03-31 05:03:52.185224 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185233 | orchestrator |
2026-03-31 05:03:52.185243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-31 05:03:52.185252 | orchestrator | Tuesday 31 March 2026 05:03:33 +0000 (0:00:00.166) 0:29:06.193 *********
2026-03-31 05:03:52.185261 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185270 | orchestrator |
2026-03-31 05:03:52.185279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-31 05:03:52.185288 | orchestrator | Tuesday 31 March 2026 05:03:33 +0000 (0:00:00.146) 0:29:06.340 *********
2026-03-31 05:03:52.185297 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185306 | orchestrator |
2026-03-31 05:03:52.185315 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-31 05:03:52.185324 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.448) 0:29:06.788 *********
2026-03-31 05:03:52.185333 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185342 | orchestrator |
2026-03-31 05:03:52.185351 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-31 05:03:52.185371 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.143) 0:29:06.931 *********
2026-03-31 05:03:52.185381 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185390 | orchestrator |
2026-03-31 05:03:52.185423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-31 05:03:52.185433 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.153) 0:29:07.085 *********
2026-03-31 05:03:52.185441 | orchestrator | skipping: [testbed-node-5]
2026-03-31 05:03:52.185450 | orchestrator |
2026-03-31 05:03:52.185460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-31 05:03:52.185524 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.153) 0:29:07.238 *********
2026-03-31 05:03:52.185534 | orchestrator | ok: [testbed-node-5]
2026-03-31 05:03:52.185543 | orchestrator |
2026-03-31 05:03:52.185552 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-31 05:03:52.185561 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.234) 0:29:07.472 *********
2026-03-31 05:03:52.185570 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-31 05:03:52.185580 | orchestrator |
2026-03-31 05:03:52.185589 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-31 05:03:52.185610 | orchestrator | Tuesday 31 March 2026 05:03:34 +0000 (0:00:00.190) 0:29:07.663 *********
2026-03-31 05:03:52.185620 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-31 05:03:52.185631 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-31 05:03:52.185641 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-31 05:03:52.185651 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-31 05:03:52.185662 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-31 05:03:52.185671 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-31 05:03:52.185681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-31 05:03:52.185692 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-31 05:03:52.185702 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-31 05:03:52.185712 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-31 05:03:52.185722 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-31 05:03:52.185732 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-31 05:03:52.185742 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-31 05:03:52.185754 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-31 05:03:52.185764 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-31 05:03:52.185774 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-31 05:03:52.185784 | orchestrator |
2026-03-31 05:03:52.185795 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-31 05:03:52.185805 | orchestrator | Tuesday 31 March 2026 05:03:40 +0000 (0:00:05.316) 0:29:12.979 ********* 2026-03-31 05:03:52.185816 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-31 05:03:52.185826 | orchestrator | 2026-03-31 05:03:52.185836 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-31 05:03:52.185846 | orchestrator | Tuesday 31 March 2026 05:03:40 +0000 (0:00:00.215) 0:29:13.195 ********* 2026-03-31 05:03:52.185857 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 05:03:52.185867 | orchestrator | 2026-03-31 05:03:52.185876 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-31 05:03:52.185884 | orchestrator | Tuesday 31 March 2026 05:03:40 +0000 (0:00:00.477) 0:29:13.673 ********* 2026-03-31 05:03:52.185893 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 05:03:52.185902 | orchestrator | 2026-03-31 05:03:52.185911 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-31 05:03:52.185919 | orchestrator | Tuesday 31 March 2026 05:03:41 +0000 (0:00:00.941) 0:29:14.614 ********* 2026-03-31 05:03:52.185970 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.185980 | orchestrator | 2026-03-31 05:03:52.185989 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-31 05:03:52.186089 | orchestrator | Tuesday 31 March 2026 05:03:42 +0000 (0:00:00.138) 0:29:14.753 ********* 2026-03-31 05:03:52.186103 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186113 | 
orchestrator | 2026-03-31 05:03:52.186122 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-31 05:03:52.186130 | orchestrator | Tuesday 31 March 2026 05:03:42 +0000 (0:00:00.466) 0:29:15.220 ********* 2026-03-31 05:03:52.186139 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186148 | orchestrator | 2026-03-31 05:03:52.186157 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-31 05:03:52.186166 | orchestrator | Tuesday 31 March 2026 05:03:42 +0000 (0:00:00.141) 0:29:15.361 ********* 2026-03-31 05:03:52.186175 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186184 | orchestrator | 2026-03-31 05:03:52.186193 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-31 05:03:52.186202 | orchestrator | Tuesday 31 March 2026 05:03:42 +0000 (0:00:00.123) 0:29:15.485 ********* 2026-03-31 05:03:52.186211 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186220 | orchestrator | 2026-03-31 05:03:52.186229 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-31 05:03:52.186238 | orchestrator | Tuesday 31 March 2026 05:03:42 +0000 (0:00:00.147) 0:29:15.633 ********* 2026-03-31 05:03:52.186247 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186256 | orchestrator | 2026-03-31 05:03:52.186265 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-31 05:03:52.186274 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.140) 0:29:15.773 ********* 2026-03-31 05:03:52.186283 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186291 | orchestrator | 2026-03-31 05:03:52.186300 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-31 05:03:52.186309 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.122) 0:29:15.896 ********* 2026-03-31 05:03:52.186318 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186327 | orchestrator | 2026-03-31 05:03:52.186336 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-31 05:03:52.186345 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.135) 0:29:16.032 ********* 2026-03-31 05:03:52.186354 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186362 | orchestrator | 2026-03-31 05:03:52.186372 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-31 05:03:52.186380 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.124) 0:29:16.157 ********* 2026-03-31 05:03:52.186389 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186398 | orchestrator | 2026-03-31 05:03:52.186407 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-31 05:03:52.186421 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.114) 0:29:16.272 ********* 2026-03-31 05:03:52.186430 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186439 | orchestrator | 2026-03-31 05:03:52.186448 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-31 05:03:52.186457 | orchestrator | Tuesday 31 March 2026 05:03:43 +0000 (0:00:00.150) 0:29:16.423 ********* 2026-03-31 05:03:52.186484 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-31 05:03:52.186494 | orchestrator | 2026-03-31 05:03:52.186503 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-31 05:03:52.186512 | orchestrator | Tuesday 31 March 2026 05:03:47 +0000 (0:00:03.396) 0:29:19.819 ********* 2026-03-31 05:03:52.186521 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 05:03:52.186537 | orchestrator | 2026-03-31 05:03:52.186546 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-31 05:03:52.186555 | orchestrator | Tuesday 31 March 2026 05:03:47 +0000 (0:00:00.169) 0:29:19.988 ********* 2026-03-31 05:03:52.186566 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-31 05:03:52.186579 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-31 05:03:52.186589 | orchestrator | 2026-03-31 05:03:52.186599 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-31 05:03:52.186607 | orchestrator | Tuesday 31 March 2026 05:03:51 +0000 (0:00:04.123) 0:29:24.112 ********* 2026-03-31 05:03:52.186616 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186625 | orchestrator | 2026-03-31 05:03:52.186634 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-31 05:03:52.186643 | orchestrator | Tuesday 31 March 2026 05:03:51 +0000 (0:00:00.448) 0:29:24.560 ********* 2026-03-31 05:03:52.186652 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186661 | orchestrator | 2026-03-31 05:03:52.186670 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-31 05:03:52.186680 | orchestrator | Tuesday 31 March 2026 05:03:52 +0000 (0:00:00.128) 0:29:24.688 ********* 2026-03-31 05:03:52.186689 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:03:52.186698 | orchestrator | 2026-03-31 05:03:52.186707 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-31 05:03:52.186723 | orchestrator | Tuesday 31 March 2026 05:03:52 +0000 (0:00:00.163) 0:29:24.852 ********* 2026-03-31 05:04:38.499842 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.499934 | orchestrator | 2026-03-31 05:04:38.499945 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-31 05:04:38.499954 | orchestrator | Tuesday 31 March 2026 05:03:52 +0000 (0:00:00.162) 0:29:25.015 ********* 2026-03-31 05:04:38.499962 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.499969 | orchestrator | 2026-03-31 05:04:38.499977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-31 05:04:38.499985 | orchestrator | Tuesday 31 March 2026 05:03:52 +0000 (0:00:00.168) 0:29:25.184 ********* 2026-03-31 05:04:38.499993 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:04:38.500001 | orchestrator | 2026-03-31 05:04:38.500008 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-31 05:04:38.500016 | orchestrator | Tuesday 31 March 2026 05:03:52 +0000 (0:00:00.258) 0:29:25.443 ********* 2026-03-31 05:04:38.500024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 05:04:38.500031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 05:04:38.500039 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 05:04:38.500046 | orchestrator | skipping: 
[testbed-node-5] 2026-03-31 05:04:38.500053 | orchestrator | 2026-03-31 05:04:38.500061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-31 05:04:38.500068 | orchestrator | Tuesday 31 March 2026 05:03:53 +0000 (0:00:00.442) 0:29:25.886 ********* 2026-03-31 05:04:38.500075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 05:04:38.500083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 05:04:38.500090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 05:04:38.500120 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500127 | orchestrator | 2026-03-31 05:04:38.500135 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-31 05:04:38.500142 | orchestrator | Tuesday 31 March 2026 05:03:53 +0000 (0:00:00.415) 0:29:26.301 ********* 2026-03-31 05:04:38.500150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-31 05:04:38.500157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-31 05:04:38.500165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-31 05:04:38.500172 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500179 | orchestrator | 2026-03-31 05:04:38.500186 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-31 05:04:38.500194 | orchestrator | Tuesday 31 March 2026 05:03:54 +0000 (0:00:00.441) 0:29:26.743 ********* 2026-03-31 05:04:38.500213 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:04:38.500220 | orchestrator | 2026-03-31 05:04:38.500228 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-31 05:04:38.500235 | orchestrator | Tuesday 31 March 2026 05:03:54 +0000 (0:00:00.169) 0:29:26.912 ********* 2026-03-31 05:04:38.500242 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-03-31 05:04:38.500250 | orchestrator | 2026-03-31 05:04:38.500257 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-31 05:04:38.500264 | orchestrator | Tuesday 31 March 2026 05:03:54 +0000 (0:00:00.439) 0:29:27.352 ********* 2026-03-31 05:04:38.500271 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:04:38.500279 | orchestrator | 2026-03-31 05:04:38.500286 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-31 05:04:38.500293 | orchestrator | Tuesday 31 March 2026 05:03:55 +0000 (0:00:01.144) 0:29:28.497 ********* 2026-03-31 05:04:38.500301 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-03-31 05:04:38.500308 | orchestrator | 2026-03-31 05:04:38.500315 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 05:04:38.500323 | orchestrator | Tuesday 31 March 2026 05:03:56 +0000 (0:00:00.242) 0:29:28.739 ********* 2026-03-31 05:04:38.500330 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:04:38.500337 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 05:04:38.500345 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 05:04:38.500353 | orchestrator | 2026-03-31 05:04:38.500363 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 05:04:38.500371 | orchestrator | Tuesday 31 March 2026 05:03:58 +0000 (0:00:02.094) 0:29:30.833 ********* 2026-03-31 05:04:38.500380 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-31 05:04:38.500388 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-31 05:04:38.500396 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:04:38.500405 | orchestrator | 2026-03-31 05:04:38.500413 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-31 05:04:38.500422 | orchestrator | Tuesday 31 March 2026 05:03:59 +0000 (0:00:00.943) 0:29:31.777 ********* 2026-03-31 05:04:38.500469 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500478 | orchestrator | 2026-03-31 05:04:38.500487 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-31 05:04:38.500496 | orchestrator | Tuesday 31 March 2026 05:03:59 +0000 (0:00:00.161) 0:29:31.939 ********* 2026-03-31 05:04:38.500504 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-03-31 05:04:38.500514 | orchestrator | 2026-03-31 05:04:38.500522 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-31 05:04:38.500531 | orchestrator | Tuesday 31 March 2026 05:03:59 +0000 (0:00:00.215) 0:29:32.154 ********* 2026-03-31 05:04:38.500540 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 05:04:38.500556 | orchestrator | 2026-03-31 05:04:38.500565 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-31 05:04:38.500574 | orchestrator | Tuesday 31 March 2026 05:04:00 +0000 (0:00:00.617) 0:29:32.772 ********* 2026-03-31 05:04:38.500595 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:04:38.500606 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-31 05:04:38.500614 | orchestrator | 2026-03-31 05:04:38.500623 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-31 05:04:38.500631 | orchestrator | Tuesday 31 March 2026 05:04:04 +0000 (0:00:04.000) 0:29:36.772 ********* 
2026-03-31 05:04:38.500640 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-31 05:04:38.500648 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-31 05:04:38.500657 | orchestrator | 2026-03-31 05:04:38.500665 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-31 05:04:38.500674 | orchestrator | Tuesday 31 March 2026 05:04:06 +0000 (0:00:01.984) 0:29:38.756 ********* 2026-03-31 05:04:38.500683 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-31 05:04:38.500690 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:04:38.500697 | orchestrator | 2026-03-31 05:04:38.500704 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-31 05:04:38.500711 | orchestrator | Tuesday 31 March 2026 05:04:07 +0000 (0:00:00.962) 0:29:39.719 ********* 2026-03-31 05:04:38.500719 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-31 05:04:38.500726 | orchestrator | 2026-03-31 05:04:38.500733 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-31 05:04:38.500740 | orchestrator | Tuesday 31 March 2026 05:04:07 +0000 (0:00:00.548) 0:29:40.268 ********* 2026-03-31 05:04:38.500747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500788 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500795 | orchestrator | 2026-03-31 05:04:38.500802 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-31 05:04:38.500809 | orchestrator | Tuesday 31 March 2026 05:04:08 +0000 (0:00:00.622) 0:29:40.890 ********* 2026-03-31 05:04:38.500817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-31 05:04:38.500853 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500865 | orchestrator | 2026-03-31 05:04:38.500873 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-31 05:04:38.500880 | orchestrator | Tuesday 31 March 2026 05:04:08 +0000 (0:00:00.611) 0:29:41.502 ********* 2026-03-31 05:04:38.500887 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:04:38.500895 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:04:38.500902 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:04:38.500909 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:04:38.500918 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-31 05:04:38.500925 | orchestrator | 2026-03-31 05:04:38.500932 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-31 05:04:38.500939 | orchestrator | Tuesday 31 March 2026 05:04:38 +0000 (0:00:29.536) 0:30:11.039 ********* 2026-03-31 05:04:38.500947 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:04:38.500954 | orchestrator | 2026-03-31 05:04:38.500961 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-31 05:04:38.500973 | orchestrator | Tuesday 31 March 2026 05:04:38 +0000 (0:00:00.128) 0:30:11.168 ********* 2026-03-31 05:05:04.251128 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.251251 | orchestrator | 2026-03-31 05:05:04.251277 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-31 05:05:04.251299 | orchestrator | Tuesday 31 March 2026 05:04:38 +0000 (0:00:00.131) 0:30:11.299 ********* 2026-03-31 05:05:04.251320 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-31 05:05:04.251339 | orchestrator | 2026-03-31 05:05:04.251355 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-31 05:05:04.251366 | orchestrator | Tuesday 31 March 2026 05:04:38 +0000 (0:00:00.227) 0:30:11.527 ********* 2026-03-31 05:05:04.251378 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-31 05:05:04.251389 | orchestrator | 2026-03-31 05:05:04.251400 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-31 05:05:04.251465 | orchestrator | Tuesday 31 March 2026 05:04:39 +0000 (0:00:00.188) 0:30:11.716 ********* 2026-03-31 05:05:04.251478 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.251490 | orchestrator | 2026-03-31 05:05:04.251501 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-31 05:05:04.251513 | orchestrator | Tuesday 31 March 2026 05:04:40 +0000 (0:00:01.074) 0:30:12.791 ********* 2026-03-31 05:05:04.251524 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.251535 | orchestrator | 2026-03-31 05:05:04.251547 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-31 05:05:04.251559 | orchestrator | Tuesday 31 March 2026 05:04:41 +0000 (0:00:00.934) 0:30:13.725 ********* 2026-03-31 05:05:04.251571 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.251583 | orchestrator | 2026-03-31 05:05:04.251594 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-31 05:05:04.251606 | orchestrator | Tuesday 31 March 2026 05:04:42 +0000 (0:00:01.155) 0:30:14.880 ********* 2026-03-31 05:05:04.251618 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-31 05:05:04.251630 | orchestrator | 2026-03-31 05:05:04.251642 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-31 05:05:04.251682 | 
orchestrator | skipping: no hosts matched 2026-03-31 05:05:04.251697 | orchestrator | 2026-03-31 05:05:04.251711 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-31 05:05:04.251724 | orchestrator | skipping: no hosts matched 2026-03-31 05:05:04.251737 | orchestrator | 2026-03-31 05:05:04.251750 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-31 05:05:04.251776 | orchestrator | skipping: no hosts matched 2026-03-31 05:05:04.251790 | orchestrator | 2026-03-31 05:05:04.251804 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-31 05:05:04.251816 | orchestrator | 2026-03-31 05:05:04.251830 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-31 05:05:04.251843 | orchestrator | Tuesday 31 March 2026 05:04:44 +0000 (0:00:02.150) 0:30:17.031 ********* 2026-03-31 05:05:04.251856 | orchestrator | changed: [testbed-node-0] 2026-03-31 05:05:04.251869 | orchestrator | changed: [testbed-node-1] 2026-03-31 05:05:04.251881 | orchestrator | changed: [testbed-node-3] 2026-03-31 05:05:04.251894 | orchestrator | changed: [testbed-node-2] 2026-03-31 05:05:04.251907 | orchestrator | changed: [testbed-node-5] 2026-03-31 05:05:04.251920 | orchestrator | changed: [testbed-node-4] 2026-03-31 05:05:04.251933 | orchestrator | 2026-03-31 05:05:04.251946 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-31 05:05:04.251959 | orchestrator | Tuesday 31 March 2026 05:04:45 +0000 (0:00:01.589) 0:30:18.621 ********* 2026-03-31 05:05:04.251971 | orchestrator | changed: [testbed-node-0] 2026-03-31 05:05:04.251985 | orchestrator | changed: [testbed-node-3] 2026-03-31 05:05:04.251999 | orchestrator | changed: [testbed-node-4] 2026-03-31 05:05:04.252011 | orchestrator | changed: [testbed-node-1] 2026-03-31 05:05:04.252024 | 
orchestrator | changed: [testbed-node-2] 2026-03-31 05:05:04.252037 | orchestrator | changed: [testbed-node-5] 2026-03-31 05:05:04.252049 | orchestrator | 2026-03-31 05:05:04.252060 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 05:05:04.252071 | orchestrator | Tuesday 31 March 2026 05:04:48 +0000 (0:00:02.438) 0:30:21.060 ********* 2026-03-31 05:05:04.252083 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.252094 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.252104 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.252115 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.252126 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.252137 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.252148 | orchestrator | 2026-03-31 05:05:04.252159 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 05:05:04.252170 | orchestrator | Tuesday 31 March 2026 05:04:49 +0000 (0:00:00.958) 0:30:22.018 ********* 2026-03-31 05:05:04.252181 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.252192 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.252203 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.252214 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.252225 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.252236 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.252247 | orchestrator | 2026-03-31 05:05:04.252258 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-31 05:05:04.252269 | orchestrator | Tuesday 31 March 2026 05:04:50 +0000 (0:00:01.397) 0:30:23.416 ********* 2026-03-31 05:05:04.252281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 05:05:04.252294 | 
orchestrator | 2026-03-31 05:05:04.252305 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-31 05:05:04.252316 | orchestrator | Tuesday 31 March 2026 05:04:52 +0000 (0:00:01.341) 0:30:24.757 ********* 2026-03-31 05:05:04.252327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 05:05:04.252338 | orchestrator | 2026-03-31 05:05:04.252379 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-31 05:05:04.252391 | orchestrator | Tuesday 31 March 2026 05:04:53 +0000 (0:00:01.355) 0:30:26.113 ********* 2026-03-31 05:05:04.252402 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:04.252435 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.252446 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.252457 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.252468 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.252479 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.252490 | orchestrator | 2026-03-31 05:05:04.252502 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-31 05:05:04.252519 | orchestrator | Tuesday 31 March 2026 05:04:54 +0000 (0:00:00.941) 0:30:27.055 ********* 2026-03-31 05:05:04.252539 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.252556 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.252574 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.252591 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.252608 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.252619 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.252630 | orchestrator | 2026-03-31 05:05:04.252642 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-31 05:05:04.252653 | orchestrator | Tuesday 31 March 2026 05:04:55 +0000 (0:00:01.047) 0:30:28.102 ********* 2026-03-31 05:05:04.252664 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.252675 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.252686 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.252697 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.252708 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.252719 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.252730 | orchestrator | 2026-03-31 05:05:04.252741 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-31 05:05:04.252752 | orchestrator | Tuesday 31 March 2026 05:04:56 +0000 (0:00:01.026) 0:30:29.129 ********* 2026-03-31 05:05:04.252763 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.252775 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.252786 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.252797 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.252808 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.252819 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.252829 | orchestrator | 2026-03-31 05:05:04.252842 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-31 05:05:04.252861 | orchestrator | Tuesday 31 March 2026 05:04:57 +0000 (0:00:01.364) 0:30:30.493 ********* 2026-03-31 05:05:04.252877 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:04.252905 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.252924 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.252942 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.252970 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.252986 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.253005 | orchestrator | 
2026-03-31 05:05:04.253022 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-31 05:05:04.253040 | orchestrator | Tuesday 31 March 2026 05:04:58 +0000 (0:00:00.788) 0:30:31.282 ********* 2026-03-31 05:05:04.253059 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.253078 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.253097 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.253115 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:04.253127 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.253138 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.253149 | orchestrator | 2026-03-31 05:05:04.253160 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-31 05:05:04.253171 | orchestrator | Tuesday 31 March 2026 05:04:59 +0000 (0:00:00.937) 0:30:32.219 ********* 2026-03-31 05:05:04.253182 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.253203 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.253214 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.253225 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:04.253236 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.253247 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.253258 | orchestrator | 2026-03-31 05:05:04.253269 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-31 05:05:04.253280 | orchestrator | Tuesday 31 March 2026 05:05:00 +0000 (0:00:00.653) 0:30:32.873 ********* 2026-03-31 05:05:04.253291 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.253302 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.253313 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.253324 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.253335 | orchestrator | ok: [testbed-node-4] 
2026-03-31 05:05:04.253346 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.253357 | orchestrator | 2026-03-31 05:05:04.253368 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-31 05:05:04.253379 | orchestrator | Tuesday 31 March 2026 05:05:01 +0000 (0:00:01.432) 0:30:34.305 ********* 2026-03-31 05:05:04.253390 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.253401 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.253444 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.253464 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:04.253483 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:04.253499 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:04.253511 | orchestrator | 2026-03-31 05:05:04.253522 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-31 05:05:04.253533 | orchestrator | Tuesday 31 March 2026 05:05:02 +0000 (0:00:01.089) 0:30:35.394 ********* 2026-03-31 05:05:04.253545 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:04.253556 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:04.253567 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:04.253578 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:04.253589 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.253600 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.253611 | orchestrator | 2026-03-31 05:05:04.253622 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-31 05:05:04.253633 | orchestrator | Tuesday 31 March 2026 05:05:03 +0000 (0:00:00.916) 0:30:36.311 ********* 2026-03-31 05:05:04.253644 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:04.253655 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:04.253666 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:04.253677 | orchestrator | skipping: 
[testbed-node-3] 2026-03-31 05:05:04.253689 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:04.253700 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:04.253713 | orchestrator | 2026-03-31 05:05:04.253746 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-31 05:05:33.797428 | orchestrator | Tuesday 31 March 2026 05:05:04 +0000 (0:00:00.605) 0:30:36.917 ********* 2026-03-31 05:05:33.797529 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.797539 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.797544 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.797549 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797554 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797559 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797564 | orchestrator | 2026-03-31 05:05:33.797570 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-31 05:05:33.797574 | orchestrator | Tuesday 31 March 2026 05:05:05 +0000 (0:00:00.899) 0:30:37.816 ********* 2026-03-31 05:05:33.797579 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.797583 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.797587 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.797592 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797596 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797600 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797624 | orchestrator | 2026-03-31 05:05:33.797629 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-31 05:05:33.797633 | orchestrator | Tuesday 31 March 2026 05:05:05 +0000 (0:00:00.605) 0:30:38.422 ********* 2026-03-31 05:05:33.797637 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.797642 | orchestrator | skipping: [testbed-node-1] 2026-03-31 
05:05:33.797646 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.797650 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797654 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797659 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797663 | orchestrator | 2026-03-31 05:05:33.797668 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-31 05:05:33.797672 | orchestrator | Tuesday 31 March 2026 05:05:06 +0000 (0:00:00.553) 0:30:38.976 ********* 2026-03-31 05:05:33.797675 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.797679 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.797683 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.797687 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.797691 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.797695 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.797699 | orchestrator | 2026-03-31 05:05:33.797703 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-31 05:05:33.797707 | orchestrator | Tuesday 31 March 2026 05:05:07 +0000 (0:00:00.713) 0:30:39.689 ********* 2026-03-31 05:05:33.797711 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.797714 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.797718 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.797733 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.797737 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.797740 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.797744 | orchestrator | 2026-03-31 05:05:33.797748 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-31 05:05:33.797752 | orchestrator | Tuesday 31 March 2026 05:05:07 +0000 (0:00:00.527) 0:30:40.216 ********* 2026-03-31 05:05:33.797756 | 
orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797760 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.797763 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.797767 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.797771 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.797775 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.797779 | orchestrator | 2026-03-31 05:05:33.797783 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-31 05:05:33.797787 | orchestrator | Tuesday 31 March 2026 05:05:08 +0000 (0:00:00.738) 0:30:40.955 ********* 2026-03-31 05:05:33.797790 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797794 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.797798 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.797802 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797806 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797810 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797814 | orchestrator | 2026-03-31 05:05:33.797818 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-31 05:05:33.797822 | orchestrator | Tuesday 31 March 2026 05:05:08 +0000 (0:00:00.625) 0:30:41.581 ********* 2026-03-31 05:05:33.797825 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797829 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.797833 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.797838 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797844 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797848 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797852 | orchestrator | 2026-03-31 05:05:33.797856 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-31 05:05:33.797860 | orchestrator | Tuesday 31 March 2026 05:05:10 +0000 (0:00:01.138) 
0:30:42.719 ********* 2026-03-31 05:05:33.797868 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797872 | orchestrator | 2026-03-31 05:05:33.797876 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-31 05:05:33.797880 | orchestrator | Tuesday 31 March 2026 05:05:12 +0000 (0:00:01.997) 0:30:44.717 ********* 2026-03-31 05:05:33.797883 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797887 | orchestrator | 2026-03-31 05:05:33.797891 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-31 05:05:33.797895 | orchestrator | Tuesday 31 March 2026 05:05:14 +0000 (0:00:02.024) 0:30:46.742 ********* 2026-03-31 05:05:33.797899 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797903 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.797906 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.797910 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797914 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797918 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797922 | orchestrator | 2026-03-31 05:05:33.797925 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-31 05:05:33.797929 | orchestrator | Tuesday 31 March 2026 05:05:15 +0000 (0:00:01.754) 0:30:48.497 ********* 2026-03-31 05:05:33.797933 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.797937 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.797941 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.797945 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.797949 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.797955 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.797959 | orchestrator | 2026-03-31 05:05:33.797964 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-31 05:05:33.797979 | 
orchestrator | Tuesday 31 March 2026 05:05:16 +0000 (0:00:00.996) 0:30:49.494 ********* 2026-03-31 05:05:33.797985 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-31 05:05:33.797991 | orchestrator | 2026-03-31 05:05:33.797996 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-31 05:05:33.798000 | orchestrator | Tuesday 31 March 2026 05:05:18 +0000 (0:00:01.352) 0:30:50.846 ********* 2026-03-31 05:05:33.798005 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.798009 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.798014 | orchestrator | ok: [testbed-node-3] 2026-03-31 05:05:33.798049 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.798053 | orchestrator | ok: [testbed-node-4] 2026-03-31 05:05:33.798057 | orchestrator | ok: [testbed-node-5] 2026-03-31 05:05:33.798060 | orchestrator | 2026-03-31 05:05:33.798064 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-31 05:05:33.798068 | orchestrator | Tuesday 31 March 2026 05:05:19 +0000 (0:00:01.733) 0:30:52.580 ********* 2026-03-31 05:05:33.798072 | orchestrator | changed: [testbed-node-3] 2026-03-31 05:05:33.798076 | orchestrator | changed: [testbed-node-0] 2026-03-31 05:05:33.798080 | orchestrator | changed: [testbed-node-5] 2026-03-31 05:05:33.798084 | orchestrator | changed: [testbed-node-4] 2026-03-31 05:05:33.798088 | orchestrator | changed: [testbed-node-1] 2026-03-31 05:05:33.798091 | orchestrator | changed: [testbed-node-2] 2026-03-31 05:05:33.798095 | orchestrator | 2026-03-31 05:05:33.798099 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-03-31 05:05:33.798103 | orchestrator | 2026-03-31 05:05:33.798107 | orchestrator | TASK [ceph-facts : Check if podman binary is present] 
************************** 2026-03-31 05:05:33.798111 | orchestrator | Tuesday 31 March 2026 05:05:23 +0000 (0:00:03.438) 0:30:56.018 ********* 2026-03-31 05:05:33.798114 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.798118 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.798122 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.798126 | orchestrator | 2026-03-31 05:05:33.798130 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 05:05:33.798134 | orchestrator | Tuesday 31 March 2026 05:05:24 +0000 (0:00:00.668) 0:30:56.687 ********* 2026-03-31 05:05:33.798141 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.798145 | orchestrator | ok: [testbed-node-1] 2026-03-31 05:05:33.798148 | orchestrator | ok: [testbed-node-2] 2026-03-31 05:05:33.798152 | orchestrator | 2026-03-31 05:05:33.798156 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-31 05:05:33.798164 | orchestrator | Tuesday 31 March 2026 05:05:24 +0000 (0:00:00.575) 0:30:57.262 ********* 2026-03-31 05:05:33.798168 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:33.798172 | orchestrator | 2026-03-31 05:05:33.798176 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-31 05:05:33.798180 | orchestrator | Tuesday 31 March 2026 05:05:26 +0000 (0:00:01.626) 0:30:58.889 ********* 2026-03-31 05:05:33.798183 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798187 | orchestrator | 2026-03-31 05:05:33.798191 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-03-31 05:05:33.798195 | orchestrator | 2026-03-31 05:05:33.798199 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-03-31 05:05:33.798203 | orchestrator | Tuesday 31 March 2026 05:05:27 +0000 (0:00:01.416) 0:31:00.306 
********* 2026-03-31 05:05:33.798206 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798210 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.798214 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.798218 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.798222 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.798225 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.798229 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:33.798233 | orchestrator | 2026-03-31 05:05:33.798237 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 05:05:33.798241 | orchestrator | Tuesday 31 March 2026 05:05:28 +0000 (0:00:00.997) 0:31:01.303 ********* 2026-03-31 05:05:33.798245 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798248 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.798252 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.798256 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.798260 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.798264 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.798267 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:33.798271 | orchestrator | 2026-03-31 05:05:33.798275 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-31 05:05:33.798279 | orchestrator | Tuesday 31 March 2026 05:05:30 +0000 (0:00:01.551) 0:31:02.855 ********* 2026-03-31 05:05:33.798283 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798286 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.798290 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.798294 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.798298 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.798301 | orchestrator | skipping: 
[testbed-node-5] 2026-03-31 05:05:33.798305 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:33.798309 | orchestrator | 2026-03-31 05:05:33.798313 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-31 05:05:33.798317 | orchestrator | Tuesday 31 March 2026 05:05:31 +0000 (0:00:01.537) 0:31:04.393 ********* 2026-03-31 05:05:33.798320 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798324 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.798328 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.798332 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:33.798336 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:33.798339 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:33.798343 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:33.798347 | orchestrator | 2026-03-31 05:05:33.798351 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-03-31 05:05:33.798355 | orchestrator | Tuesday 31 March 2026 05:05:33 +0000 (0:00:01.540) 0:31:05.933 ********* 2026-03-31 05:05:33.798362 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:33.798366 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:33.798369 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:33.798376 | orchestrator | skipping: [testbed-node-3] 2026-03-31 05:05:50.052782 | orchestrator | skipping: [testbed-node-4] 2026-03-31 05:05:50.052896 | orchestrator | skipping: [testbed-node-5] 2026-03-31 05:05:50.052910 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.052922 | orchestrator | 2026-03-31 05:05:50.052933 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-03-31 05:05:50.052945 | orchestrator | 2026-03-31 05:05:50.052955 | orchestrator | TASK [Stop monitoring services] ************************************************ 
2026-03-31 05:05:50.052966 | orchestrator | Tuesday 31 March 2026 05:05:34 +0000 (0:00:01.702) 0:31:07.635 ********* 2026-03-31 05:05:50.052976 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-03-31 05:05:50.052987 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-03-31 05:05:50.052997 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-03-31 05:05:50.053007 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053017 | orchestrator | 2026-03-31 05:05:50.053027 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-31 05:05:50.053037 | orchestrator | Tuesday 31 March 2026 05:05:35 +0000 (0:00:00.171) 0:31:07.807 ********* 2026-03-31 05:05:50.053046 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053056 | orchestrator | 2026-03-31 05:05:50.053066 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-31 05:05:50.053076 | orchestrator | Tuesday 31 March 2026 05:05:35 +0000 (0:00:00.167) 0:31:07.974 ********* 2026-03-31 05:05:50.053086 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053096 | orchestrator | 2026-03-31 05:05:50.053106 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-31 05:05:50.053116 | orchestrator | Tuesday 31 March 2026 05:05:35 +0000 (0:00:00.147) 0:31:08.122 ********* 2026-03-31 05:05:50.053126 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053135 | orchestrator | 2026-03-31 05:05:50.053145 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-31 05:05:50.053155 | orchestrator | Tuesday 31 March 2026 05:05:35 +0000 (0:00:00.438) 0:31:08.560 ********* 2026-03-31 05:05:50.053165 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053175 | orchestrator | 2026-03-31 05:05:50.053185 | orchestrator | 
TASK [ceph-prometheus : Create prometheus directories] ************************* 2026-03-31 05:05:50.053194 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.246) 0:31:08.807 ********* 2026-03-31 05:05:50.053204 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-03-31 05:05:50.053214 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-03-31 05:05:50.053240 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053250 | orchestrator | 2026-03-31 05:05:50.053260 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-03-31 05:05:50.053270 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.166) 0:31:08.974 ********* 2026-03-31 05:05:50.053280 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053290 | orchestrator | 2026-03-31 05:05:50.053300 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-03-31 05:05:50.053310 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.153) 0:31:09.127 ********* 2026-03-31 05:05:50.053322 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053334 | orchestrator | 2026-03-31 05:05:50.053347 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-03-31 05:05:50.053358 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.148) 0:31:09.275 ********* 2026-03-31 05:05:50.053370 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053460 | orchestrator | 2026-03-31 05:05:50.053472 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-03-31 05:05:50.053511 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.135) 0:31:09.411 ********* 2026-03-31 05:05:50.053523 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-03-31 05:05:50.053534 | orchestrator | skipping: 
[testbed-manager] => (item=/var/lib/alertmanager)  2026-03-31 05:05:50.053546 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053557 | orchestrator | 2026-03-31 05:05:50.053569 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-03-31 05:05:50.053580 | orchestrator | Tuesday 31 March 2026 05:05:36 +0000 (0:00:00.164) 0:31:09.575 ********* 2026-03-31 05:05:50.053592 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053603 | orchestrator | 2026-03-31 05:05:50.053615 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-03-31 05:05:50.053626 | orchestrator | Tuesday 31 March 2026 05:05:37 +0000 (0:00:00.151) 0:31:09.727 ********* 2026-03-31 05:05:50.053638 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053649 | orchestrator | 2026-03-31 05:05:50.053660 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-03-31 05:05:50.053673 | orchestrator | Tuesday 31 March 2026 05:05:37 +0000 (0:00:00.543) 0:31:10.270 ********* 2026-03-31 05:05:50.053684 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053694 | orchestrator | 2026-03-31 05:05:50.053704 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-03-31 05:05:50.053714 | orchestrator | Tuesday 31 March 2026 05:05:37 +0000 (0:00:00.142) 0:31:10.412 ********* 2026-03-31 05:05:50.053723 | orchestrator | skipping: [testbed-manager] 2026-03-31 05:05:50.053733 | orchestrator | 2026-03-31 05:05:50.053743 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-03-31 05:05:50.053753 | orchestrator | 2026-03-31 05:05:50.053763 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-31 05:05:50.053773 | orchestrator | Tuesday 31 March 2026 05:05:38 +0000 (0:00:00.507) 0:31:10.920 
********* 2026-03-31 05:05:50.053783 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.053793 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.053803 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.053813 | orchestrator | 2026-03-31 05:05:50.053823 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-31 05:05:50.053833 | orchestrator | Tuesday 31 March 2026 05:05:38 +0000 (0:00:00.553) 0:31:11.473 ********* 2026-03-31 05:05:50.053843 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.053853 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.053881 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.053891 | orchestrator | 2026-03-31 05:05:50.053901 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-31 05:05:50.053911 | orchestrator | Tuesday 31 March 2026 05:05:39 +0000 (0:00:00.603) 0:31:12.077 ********* 2026-03-31 05:05:50.053920 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.053930 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.053940 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.053950 | orchestrator | 2026-03-31 05:05:50.053960 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-31 05:05:50.053969 | orchestrator | Tuesday 31 March 2026 05:05:39 +0000 (0:00:00.311) 0:31:12.388 ********* 2026-03-31 05:05:50.053979 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.053989 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.053998 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.054008 | orchestrator | 2026-03-31 05:05:50.054073 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-31 05:05:50.054084 | orchestrator | Tuesday 31 March 2026 05:05:40 +0000 (0:00:00.360) 0:31:12.749 
********* 2026-03-31 05:05:50.054094 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.054104 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.054113 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.054123 | orchestrator | 2026-03-31 05:05:50.054147 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-03-31 05:05:50.054157 | orchestrator | Tuesday 31 March 2026 05:05:40 +0000 (0:00:00.827) 0:31:13.577 ********* 2026-03-31 05:05:50.054166 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.054176 | orchestrator | skipping: [testbed-node-1] 2026-03-31 05:05:50.054186 | orchestrator | skipping: [testbed-node-2] 2026-03-31 05:05:50.054196 | orchestrator | 2026-03-31 05:05:50.054206 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-03-31 05:05:50.054216 | orchestrator | Tuesday 31 March 2026 05:05:41 +0000 (0:00:00.313) 0:31:13.891 ********* 2026-03-31 05:05:50.054226 | orchestrator | skipping: [testbed-node-0] 2026-03-31 05:05:50.054238 | orchestrator | 2026-03-31 05:05:50.054255 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-03-31 05:05:50.054265 | orchestrator | 2026-03-31 05:05:50.054275 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-31 05:05:50.054285 | orchestrator | Tuesday 31 March 2026 05:05:41 +0000 (0:00:00.425) 0:31:14.317 ********* 2026-03-31 05:05:50.054295 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:50.054305 | orchestrator | 2026-03-31 05:05:50.054315 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-31 05:05:50.054331 | orchestrator | Tuesday 31 March 2026 05:05:42 +0000 (0:00:00.456) 0:31:14.773 ********* 2026-03-31 05:05:50.054341 | orchestrator | ok: [testbed-node-0] 2026-03-31 05:05:50.054351 | orchestrator | 
2026-03-31 05:05:50.054361 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-03-31 05:05:50.054371 | orchestrator | Tuesday 31 March 2026 05:05:42 +0000 (0:00:00.447) 0:31:14.979 *********
2026-03-31 05:05:50.054406 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054423 | orchestrator |
2026-03-31 05:05:50.054442 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-03-31 05:05:50.054458 | orchestrator | Tuesday 31 March 2026 05:05:42 +0000 (0:00:00.447) 0:31:15.426 *********
2026-03-31 05:05:50.054475 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054485 | orchestrator |
2026-03-31 05:05:50.054495 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-03-31 05:05:50.054505 | orchestrator | Tuesday 31 March 2026 05:05:44 +0000 (0:00:01.751) 0:31:17.178 *********
2026-03-31 05:05:50.054514 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054524 | orchestrator |
2026-03-31 05:05:50.054534 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-03-31 05:05:50.054543 | orchestrator | Tuesday 31 March 2026 05:05:46 +0000 (0:00:01.997) 0:31:19.176 *********
2026-03-31 05:05:50.054553 | orchestrator | changed: [testbed-node-0]
2026-03-31 05:05:50.054563 | orchestrator |
2026-03-31 05:05:50.054573 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-03-31 05:05:50.054582 | orchestrator |
2026-03-31 05:05:50.054592 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-03-31 05:05:50.054602 | orchestrator | Tuesday 31 March 2026 05:05:47 +0000 (0:00:00.448) 0:31:19.879 *********
2026-03-31 05:05:50.054612 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054622 | orchestrator | ok: [testbed-node-1]
2026-03-31 05:05:50.054631 | orchestrator | ok: [testbed-node-2]
2026-03-31 05:05:50.054641 | orchestrator |
2026-03-31 05:05:50.054651 | orchestrator | TASK [Show ceph status] ********************************************************
2026-03-31 05:05:50.054660 | orchestrator | Tuesday 31 March 2026 05:05:47 +0000 (0:00:00.448) 0:31:20.328 *********
2026-03-31 05:05:50.054670 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054680 | orchestrator |
2026-03-31 05:05:50.054689 | orchestrator | TASK [Show all daemons version] ************************************************
2026-03-31 05:05:50.054699 | orchestrator | Tuesday 31 March 2026 05:05:48 +0000 (0:00:01.244) 0:31:21.572 *********
2026-03-31 05:05:50.054709 | orchestrator | ok: [testbed-node-0]
2026-03-31 05:05:50.054719 | orchestrator |
2026-03-31 05:05:50.054728 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 05:05:50.054739 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-31 05:05:50.054759 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-03-31 05:05:50.054771 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0
2026-03-31 05:05:50.054780 | orchestrator | testbed-node-1 : ok=191  changed=16  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0
2026-03-31 05:05:50.054798 | orchestrator | testbed-node-2 : ok=196  changed=15  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-03-31 05:05:50.814205 | orchestrator | testbed-node-3 : ok=311  changed=22  unreachable=0 failed=0 skipped=348  rescued=0 ignored=0
2026-03-31 05:05:50.814519 | orchestrator | testbed-node-4 : ok=307  changed=18  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0
2026-03-31 05:05:50.814546 | orchestrator | testbed-node-5 : ok=309  changed=17  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0
2026-03-31 05:05:50.814560 | orchestrator |
2026-03-31 05:05:50.814573 | orchestrator |
2026-03-31 05:05:50.814584 | orchestrator |
2026-03-31 05:05:50.814596 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 05:05:50.814609 | orchestrator | Tuesday 31 March 2026 05:05:50 +0000 (0:00:01.137) 0:31:22.711 *********
2026-03-31 05:05:50.814620 | orchestrator | ===============================================================================
2026-03-31 05:05:50.814631 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 68.74s
2026-03-31 05:05:50.814642 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 68.56s
2026-03-31 05:05:50.814653 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 38.85s
2026-03-31 05:05:50.814664 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.54s
2026-03-31 05:05:50.814675 | orchestrator | Gather and delegate facts ---------------------------------------------- 29.31s
2026-03-31 05:05:50.814686 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.04s
2026-03-31 05:05:50.814698 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.00s
2026-03-31 05:05:50.814709 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 26.01s
2026-03-31 05:05:50.814720 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.99s
2026-03-31 05:05:50.814731 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.91s
2026-03-31 05:05:50.814762 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 20.87s
2026-03-31 05:05:50.814775 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 17.77s
2026-03-31 05:05:50.814788 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 15.68s
2026-03-31 05:05:50.814801 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.45s
2026-03-31 05:05:50.814813 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.44s
2026-03-31 05:05:50.814825 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 11.79s
2026-03-31 05:05:50.814838 | orchestrator | Restart active mds ----------------------------------------------------- 10.69s
2026-03-31 05:05:50.814850 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 10.61s
2026-03-31 05:05:50.814862 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 10.12s
2026-03-31 05:05:50.814875 | orchestrator | Stop ceph osd ----------------------------------------------------------- 9.24s
2026-03-31 05:05:51.118836 | orchestrator | + osism apply cephclient
2026-03-31 05:06:03.160945 | orchestrator | 2026-03-31 05:06:03 | INFO  | Task 2ee89ee8-8d85-4235-99d0-a0494a76e993 (cephclient) was prepared for execution.
2026-03-31 05:06:03.161052 | orchestrator | 2026-03-31 05:06:03 | INFO  | It takes a moment until task 2ee89ee8-8d85-4235-99d0-a0494a76e993 (cephclient) has been started and output is visible here.
2026-03-31 05:06:18.695859 | orchestrator |
2026-03-31 05:06:18.696020 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-31 05:06:18.696039 | orchestrator |
2026-03-31 05:06:18.696052 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-31 05:06:18.696097 | orchestrator | Tuesday 31 March 2026 05:06:07 +0000 (0:00:00.238) 0:00:00.238 *********
2026-03-31 05:06:18.696111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-31 05:06:18.696124 | orchestrator |
2026-03-31 05:06:18.696136 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-31 05:06:18.696147 | orchestrator | Tuesday 31 March 2026 05:06:07 +0000 (0:00:00.236) 0:00:00.475 *********
2026-03-31 05:06:18.696160 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-31 05:06:18.696171 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-31 05:06:18.696184 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-31 05:06:18.696195 | orchestrator |
2026-03-31 05:06:18.696207 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-31 05:06:18.696218 | orchestrator | Tuesday 31 March 2026 05:06:09 +0000 (0:00:01.597) 0:00:02.072 *********
2026-03-31 05:06:18.696230 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-31 05:06:18.696242 | orchestrator |
2026-03-31 05:06:18.696253 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-31 05:06:18.696264 | orchestrator | Tuesday 31 March 2026 05:06:10 +0000 (0:00:00.930) 0:00:03.368 *********
2026-03-31 05:06:18.696275 | orchestrator | ok: [testbed-manager]
2026-03-31 05:06:18.696287 | orchestrator |
2026-03-31 05:06:18.696297 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-31 05:06:18.696309 | orchestrator | Tuesday 31 March 2026 05:06:11 +0000 (0:00:00.896) 0:00:04.298 *********
2026-03-31 05:06:18.696321 | orchestrator | ok: [testbed-manager]
2026-03-31 05:06:18.696341 | orchestrator |
2026-03-31 05:06:18.696387 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-31 05:06:18.696406 | orchestrator | Tuesday 31 March 2026 05:06:12 +0000 (0:00:01.073) 0:00:05.195 *********
2026-03-31 05:06:18.696425 | orchestrator | ok: [testbed-manager]
2026-03-31 05:06:18.696447 | orchestrator |
2026-03-31 05:06:18.696465 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-31 05:06:18.696483 | orchestrator | Tuesday 31 March 2026 05:06:13 +0000 (0:00:01.073) 0:00:06.269 *********
2026-03-31 05:06:18.696501 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-31 05:06:18.696520 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-03-31 05:06:18.696541 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-31 05:06:18.696561 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-31 05:06:18.696580 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-31 05:06:18.696600 | orchestrator |
2026-03-31 05:06:18.696614 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-31 05:06:18.696628 | orchestrator | Tuesday 31 March 2026 05:06:17 +0000 (0:00:03.987) 0:00:10.257 *********
2026-03-31 05:06:18.696641 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-31 05:06:18.696654 | orchestrator |
2026-03-31 05:06:18.696667 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-31 05:06:18.696679 | orchestrator | Tuesday 31 March 2026 05:06:17 +0000 (0:00:00.467) 0:00:10.724 *********
2026-03-31 05:06:18.696723 | orchestrator | skipping: [testbed-manager]
2026-03-31 05:06:18.696734 | orchestrator |
2026-03-31 05:06:18.696746 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-31 05:06:18.696757 | orchestrator | Tuesday 31 March 2026 05:06:18 +0000 (0:00:00.138) 0:00:10.862 *********
2026-03-31 05:06:18.696772 | orchestrator | skipping: [testbed-manager]
2026-03-31 05:06:18.696791 | orchestrator |
2026-03-31 05:06:18.696803 | orchestrator | PLAY RECAP *********************************************************************
2026-03-31 05:06:18.696815 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-31 05:06:18.696827 | orchestrator |
2026-03-31 05:06:18.696838 | orchestrator |
2026-03-31 05:06:18.696849 | orchestrator | TASKS RECAP ********************************************************************
2026-03-31 05:06:18.696874 | orchestrator | Tuesday 31 March 2026 05:06:18 +0000 (0:00:00.161) 0:00:11.024 *********
2026-03-31 05:06:18.696886 | orchestrator | ===============================================================================
2026-03-31 05:06:18.696899 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s
2026-03-31 05:06:18.696917 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.60s
2026-03-31 05:06:18.696935 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s
2026-03-31 05:06:18.696955 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.07s
2026-03-31 05:06:18.696973 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s
2026-03-31 05:06:18.696992 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2026-03-31 05:06:18.697004 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2026-03-31 05:06:18.697015 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-03-31 05:06:18.697026 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.16s
2026-03-31 05:06:18.697036 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-03-31 05:06:19.006943 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-31 05:06:19.007020 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-03-31 05:06:19.013709 | orchestrator | + set -e
2026-03-31 05:06:19.013775 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-31 05:06:19.013790 | orchestrator | ++ export INTERACTIVE=false
2026-03-31 05:06:19.013811 | orchestrator | ++ INTERACTIVE=false
2026-03-31 05:06:19.013830 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-31 05:06:19.013851 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-31 05:06:19.013870 | orchestrator | + source /opt/manager-vars.sh
2026-03-31 05:06:19.014526 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-31 05:06:19.014551 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-31 05:06:19.014565 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-31 05:06:19.014578 | orchestrator | ++ CEPH_VERSION=reef
2026-03-31 05:06:19.014591 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-31 05:06:19.014605 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-31 05:06:19.014619 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-31 05:06:19.014631 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-31 05:06:19.014642 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-31 05:06:19.014653 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-31 05:06:19.014665 | orchestrator | ++ export ARA=false
2026-03-31 05:06:19.014676 | orchestrator | ++ ARA=false
2026-03-31 05:06:19.014687 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-31 05:06:19.014698 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-31 05:06:19.014709 | orchestrator | ++ export TEMPEST=false
2026-03-31 05:06:19.014720 | orchestrator | ++ TEMPEST=false
2026-03-31 05:06:19.014731 | orchestrator | ++ export IS_ZUUL=true
2026-03-31 05:06:19.014742 | orchestrator | ++ IS_ZUUL=true
2026-03-31 05:06:19.014754 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 05:06:19.014765 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.240
2026-03-31 05:06:19.014776 | orchestrator | ++ export EXTERNAL_API=false
2026-03-31 05:06:19.014787 | orchestrator | ++ EXTERNAL_API=false
2026-03-31 05:06:19.014798 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-31 05:06:19.014809 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-31 05:06:19.014821 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-31 05:06:19.014860 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-31 05:06:19.014872 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-31 05:06:19.014883 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-31 05:06:19.014894 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-31 05:06:19.014905 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-31 05:06:19.014916 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-31 05:06:19.015695 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-31 05:06:19.025214 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-03-31 05:06:19.025273 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-03-31 05:06:19.025295 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-31 05:06:19.025314 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-03-31 05:06:20.804599 | orchestrator | osism: 'migrate rabbitmq3to4 prepare' is not an osism command. See 'osism --help'.
2026-03-31 05:06:20.804699 | orchestrator | Did you mean one of these?
2026-03-31 05:06:20.804717 | orchestrator | manage baremetal burnin
2026-03-31 05:06:20.804730 | orchestrator | manage baremetal clean
2026-03-31 05:06:20.804741 | orchestrator | manage baremetal delete
2026-03-31 05:06:20.804752 | orchestrator | manage baremetal deploy
2026-03-31 05:06:20.804763 | orchestrator | manage baremetal dump
2026-03-31 05:06:20.804775 | orchestrator | manage baremetal list
2026-03-31 05:06:20.804786 | orchestrator | manage baremetal maintenance set
2026-03-31 05:06:20.804798 | orchestrator | manage baremetal maintenance unset
2026-03-31 05:06:20.804809 | orchestrator | manage baremetal ping
2026-03-31 05:06:20.804820 | orchestrator | manage baremetal power off
2026-03-31 05:06:20.804831 | orchestrator | manage baremetal power on
2026-03-31 05:06:20.804842 | orchestrator | manage baremetal provide
2026-03-31 05:06:20.804853 | orchestrator | manage baremetal undeploy
2026-03-31 05:06:20.804864 | orchestrator | manage compute disable
2026-03-31 05:06:20.804874 | orchestrator | manage compute enable
2026-03-31 05:06:20.804885 | orchestrator | manage compute evacuate
2026-03-31 05:06:20.804896 | orchestrator | manage compute list
2026-03-31 05:06:20.804907 | orchestrator | manage compute migrate
2026-03-31 05:06:20.804919 | orchestrator | manage compute migration list
2026-03-31 05:06:20.804930 | orchestrator | manage compute start
2026-03-31 05:06:20.804941 | orchestrator | manage compute stop
2026-03-31 05:06:20.804952 | orchestrator | manage dnsmasq
2026-03-31 05:06:20.804963 | orchestrator | manage flavors
2026-03-31 05:06:20.804974 | orchestrator | manage image clusterapi
2026-03-31 05:06:20.804985 | orchestrator | manage image clusterapi gardener
2026-03-31 05:06:20.804996 | orchestrator | manage image gardenlinux
2026-03-31 05:06:20.805006 | orchestrator | manage image octavia
2026-03-31 05:06:20.805017 | orchestrator | manage images
2026-03-31 05:06:20.805028 | orchestrator | manage netbox
2026-03-31 05:06:20.805039 | orchestrator | manage project create
2026-03-31 05:06:20.805049 | orchestrator | manage project sync
2026-03-31 05:06:20.805060 | orchestrator | manage redfish list
2026-03-31 05:06:20.805071 | orchestrator | manage server list
2026-03-31 05:06:20.805082 | orchestrator | manage server migrate
2026-03-31 05:06:20.805093 | orchestrator | manage volume list
2026-03-31 05:06:20.805104 | orchestrator | validate
2026-03-31 05:06:21.517589 | orchestrator | ERROR
2026-03-31 05:06:21.517830 | orchestrator | {
2026-03-31 05:06:21.517868 | orchestrator | "delta": "0:50:51.296512",
2026-03-31 05:06:21.517894 | orchestrator | "end": "2026-03-31 05:06:21.093192",
2026-03-31 05:06:21.517915 | orchestrator | "msg": "non-zero return code",
2026-03-31 05:06:21.517935 | orchestrator | "rc": 2,
2026-03-31 05:06:21.517954 | orchestrator | "start": "2026-03-31 04:15:29.796680"
2026-03-31 05:06:21.518022 | orchestrator | } failure
2026-03-31 05:06:21.785875 |
2026-03-31 05:06:21.786104 | PLAY RECAP
2026-03-31 05:06:21.786225 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-31 05:06:21.786283 |
2026-03-31 05:06:22.169127 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-31 05:06:22.173533 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-31 05:06:24.075162 |
2026-03-31 05:06:24.075361 | PLAY [Post output play]
2026-03-31 05:06:24.110491 |
2026-03-31 05:06:24.110697 | LOOP [stage-output : Register sources]
2026-03-31 05:06:24.200105 |
2026-03-31 05:06:24.200390 | TASK [stage-output : Check sudo]
2026-03-31 05:06:25.104634 | orchestrator | sudo: a password is required
2026-03-31 05:06:25.242250 | orchestrator | ok: Runtime: 0:00:00.013130
2026-03-31 05:06:25.258302 |
2026-03-31 05:06:25.258470 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-31 05:06:25.298766 |
2026-03-31 05:06:25.299202 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-31 05:06:25.376747 | orchestrator | ok
2026-03-31 05:06:25.385180 |
2026-03-31 05:06:25.385321 | LOOP [stage-output : Ensure target folders exist]
2026-03-31 05:06:25.878866 | orchestrator | ok: "docs"
2026-03-31 05:06:25.879222 |
2026-03-31 05:06:26.144466 | orchestrator | ok: "artifacts"
2026-03-31 05:06:26.391434 | orchestrator | ok: "logs"
2026-03-31 05:06:26.410104 |
2026-03-31 05:06:26.410314 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-31 05:06:26.452552 |
2026-03-31 05:06:26.452926 | TASK [stage-output : Make all log files readable]
2026-03-31 05:06:26.738166 | orchestrator | ok
2026-03-31 05:06:26.748373 |
2026-03-31 05:06:26.748527 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-31 05:06:26.793565 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:26.810958 |
2026-03-31 05:06:26.811198 | TASK [stage-output : Discover log files for compression]
2026-03-31 05:06:26.835892 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:26.847941 |
2026-03-31 05:06:26.848135 | LOOP [stage-output : Archive everything from logs]
2026-03-31 05:06:26.892280 |
2026-03-31 05:06:26.892448 | PLAY [Post cleanup play]
2026-03-31 05:06:26.900730 |
2026-03-31 05:06:26.900847 | TASK [Set cloud fact (Zuul deployment)]
2026-03-31 05:06:26.956532 | orchestrator | ok
2026-03-31 05:06:26.967566 |
2026-03-31 05:06:26.967679 | TASK [Set cloud fact (local deployment)]
2026-03-31 05:06:26.993490 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:27.003517 |
2026-03-31 05:06:27.003648 | TASK [Clean the cloud environment]
2026-03-31 05:06:27.639238 | orchestrator | 2026-03-31 05:06:27 - clean up servers
2026-03-31 05:06:28.382402 | orchestrator | 2026-03-31 05:06:28 - testbed-manager
2026-03-31 05:06:28.469492 | orchestrator | 2026-03-31 05:06:28 - testbed-node-0
2026-03-31 05:06:28.554030 | orchestrator | 2026-03-31 05:06:28 - testbed-node-5
2026-03-31 05:06:28.638719 | orchestrator | 2026-03-31 05:06:28 - testbed-node-1
2026-03-31 05:06:28.744366 | orchestrator | 2026-03-31 05:06:28 - testbed-node-3
2026-03-31 05:06:28.847313 | orchestrator | 2026-03-31 05:06:28 - testbed-node-4
2026-03-31 05:06:28.954172 | orchestrator | 2026-03-31 05:06:28 - testbed-node-2
2026-03-31 05:06:29.037561 | orchestrator | 2026-03-31 05:06:29 - clean up keypairs
2026-03-31 05:06:29.060165 | orchestrator | 2026-03-31 05:06:29 - testbed
2026-03-31 05:06:29.095240 | orchestrator | 2026-03-31 05:06:29 - wait for servers to be gone
2026-03-31 05:06:38.163629 | orchestrator | 2026-03-31 05:06:38 - clean up ports
2026-03-31 05:06:38.343737 | orchestrator | 2026-03-31 05:06:38 - 00a2fd43-22e9-4648-ae0b-46a0ee35e86c
2026-03-31 05:06:38.618631 | orchestrator | 2026-03-31 05:06:38 - 6a969a3a-2bef-476b-9ec5-50474584d71e
2026-03-31 05:06:38.909320 | orchestrator | 2026-03-31 05:06:38 - bd11d251-bd45-4708-87ed-55f2c6b1e5ca
2026-03-31 05:06:39.583777 | orchestrator | 2026-03-31 05:06:39 - c53f51cd-ad88-4e8b-9dd6-d055866c6625
2026-03-31 05:06:39.831776 | orchestrator | 2026-03-31 05:06:39 - cf5c59b2-c807-4c91-bb19-00674e01bfc8
2026-03-31 05:06:40.048507 | orchestrator | 2026-03-31 05:06:40 - ecef56a8-6897-49da-86f9-efc0caa8fed1
2026-03-31 05:06:40.493523 | orchestrator | 2026-03-31 05:06:40 - f110b1d2-3c0d-4228-881b-d109ed930bf3
2026-03-31 05:06:40.725549 | orchestrator | 2026-03-31 05:06:40 - clean up volumes
2026-03-31 05:06:40.837863 | orchestrator | 2026-03-31 05:06:40 - testbed-volume-5-node-base
2026-03-31 05:06:40.875803 | orchestrator | 2026-03-31 05:06:40 - testbed-volume-manager-base
2026-03-31 05:06:40.919505 | orchestrator | 2026-03-31 05:06:40 - testbed-volume-4-node-base
2026-03-31 05:06:40.962117 | orchestrator | 2026-03-31 05:06:40 - testbed-volume-0-node-base
2026-03-31 05:06:41.003260 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-3-node-base
2026-03-31 05:06:41.050006 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-2-node-base
2026-03-31 05:06:41.092855 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-1-node-base
2026-03-31 05:06:41.136995 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-2-node-5
2026-03-31 05:06:41.182091 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-5-node-5
2026-03-31 05:06:41.226339 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-0-node-3
2026-03-31 05:06:41.267864 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-1-node-4
2026-03-31 05:06:41.311190 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-6-node-3
2026-03-31 05:06:41.351015 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-3-node-3
2026-03-31 05:06:41.394285 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-4-node-4
2026-03-31 05:06:41.441068 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-7-node-4
2026-03-31 05:06:41.490168 | orchestrator | 2026-03-31 05:06:41 - testbed-volume-8-node-5
2026-03-31 05:06:41.535961 | orchestrator | 2026-03-31 05:06:41 - disconnect routers
2026-03-31 05:06:41.671660 | orchestrator | 2026-03-31 05:06:41 - testbed
2026-03-31 05:06:43.166824 | orchestrator | 2026-03-31 05:06:43 - clean up subnets
2026-03-31 05:06:43.223412 | orchestrator | 2026-03-31 05:06:43 - subnet-testbed-management
2026-03-31 05:06:43.380424 | orchestrator | 2026-03-31 05:06:43 - clean up networks
2026-03-31 05:06:43.567007 | orchestrator | 2026-03-31 05:06:43 - net-testbed-management
2026-03-31 05:06:43.851874 | orchestrator | 2026-03-31 05:06:43 - clean up security groups
2026-03-31 05:06:43.891088 | orchestrator | 2026-03-31 05:06:43 - testbed-node
2026-03-31 05:06:44.004920 | orchestrator | 2026-03-31 05:06:44 - testbed-management
2026-03-31 05:06:44.110794 | orchestrator | 2026-03-31 05:06:44 - clean up floating ips
2026-03-31 05:06:44.140033 | orchestrator | 2026-03-31 05:06:44 - 81.163.193.240
2026-03-31 05:06:44.505840 | orchestrator | 2026-03-31 05:06:44 - clean up routers
2026-03-31 05:06:44.607338 | orchestrator | 2026-03-31 05:06:44 - testbed
2026-03-31 05:06:45.562887 | orchestrator | ok: Runtime: 0:00:18.119844
2026-03-31 05:06:45.567218 |
2026-03-31 05:06:45.567379 | PLAY RECAP
2026-03-31 05:06:45.567499 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-31 05:06:45.567560 |
2026-03-31 05:06:45.717896 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-31 05:06:45.719506 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-31 05:06:46.469035 |
2026-03-31 05:06:46.469219 | PLAY [Cleanup play]
2026-03-31 05:06:46.485932 |
2026-03-31 05:06:46.486111 | TASK [Set cloud fact (Zuul deployment)]
2026-03-31 05:06:46.541871 | orchestrator | ok
2026-03-31 05:06:46.550509 |
2026-03-31 05:06:46.550713 | TASK [Set cloud fact (local deployment)]
2026-03-31 05:06:46.595598 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:46.613696 |
2026-03-31 05:06:46.613882 | TASK [Clean the cloud environment]
2026-03-31 05:06:47.776253 | orchestrator | 2026-03-31 05:06:47 - clean up servers
2026-03-31 05:06:48.248938 | orchestrator | 2026-03-31 05:06:48 - clean up keypairs
2026-03-31 05:06:48.268265 | orchestrator | 2026-03-31 05:06:48 - wait for servers to be gone
2026-03-31 05:06:48.316273 | orchestrator | 2026-03-31 05:06:48 - clean up ports
2026-03-31 05:06:48.401103 | orchestrator | 2026-03-31 05:06:48 - clean up volumes
2026-03-31 05:06:48.464871 | orchestrator | 2026-03-31 05:06:48 - disconnect routers
2026-03-31 05:06:48.491898 | orchestrator | 2026-03-31 05:06:48 - clean up subnets
2026-03-31 05:06:48.516988 | orchestrator | 2026-03-31 05:06:48 - clean up networks
2026-03-31 05:06:48.676387 | orchestrator | 2026-03-31 05:06:48 - clean up security groups
2026-03-31 05:06:48.711391 | orchestrator | 2026-03-31 05:06:48 - clean up floating ips
2026-03-31 05:06:48.735132 | orchestrator | 2026-03-31 05:06:48 - clean up routers
2026-03-31 05:06:49.157866 | orchestrator | ok: Runtime: 0:00:01.366573
2026-03-31 05:06:49.161771 |
2026-03-31 05:06:49.161946 | PLAY RECAP
2026-03-31 05:06:49.162172 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-31 05:06:49.162256 |
2026-03-31 05:06:49.293943 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-31 05:06:49.296223 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-31 05:06:50.062885 |
2026-03-31 05:06:50.063108 | PLAY [Base post-fetch]
2026-03-31 05:06:50.079395 |
2026-03-31 05:06:50.079553 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-31 05:06:50.135840 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:50.150911 |
2026-03-31 05:06:50.151174 | TASK [fetch-output : Set log path for single node]
2026-03-31 05:06:50.197579 | orchestrator | ok
2026-03-31 05:06:50.205940 |
2026-03-31 05:06:50.206099 | LOOP [fetch-output : Ensure local output dirs]
2026-03-31 05:06:50.703508 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/logs"
2026-03-31 05:06:50.980147 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/artifacts"
2026-03-31 05:06:51.260704 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6dc27caeaea747b9b7722bbf633814ae/work/docs"
2026-03-31 05:06:51.282561 |
2026-03-31 05:06:51.282801 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-31 05:06:52.301263 | orchestrator | changed: .d..t...... ./
2026-03-31 05:06:52.301696 | orchestrator | changed: All items complete
2026-03-31 05:06:52.301774 |
2026-03-31 05:06:53.021341 | orchestrator | changed: .d..t...... ./
2026-03-31 05:06:53.748231 | orchestrator | changed: .d..t...... ./
2026-03-31 05:06:53.780665 |
2026-03-31 05:06:53.780825 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-31 05:06:53.822361 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:53.825880 | orchestrator | skipping: Conditional result was False
2026-03-31 05:06:53.841277 |
2026-03-31 05:06:53.841402 | PLAY RECAP
2026-03-31 05:06:53.841475 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-31 05:06:53.841511 |
2026-03-31 05:06:53.980668 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-31 05:06:53.983730 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-31 05:06:54.771352 |
2026-03-31 05:06:54.771517 | PLAY [Base post]
2026-03-31 05:06:54.786415 |
2026-03-31 05:06:54.786553 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-31 05:06:55.769509 | orchestrator | changed
2026-03-31 05:06:55.780282 |
2026-03-31 05:06:55.780438 | PLAY RECAP
2026-03-31 05:06:55.780518 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-31 05:06:55.780596 |
2026-03-31 05:06:55.908167 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-31 05:06:55.909769 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-31 05:06:56.699433 |
2026-03-31 05:06:56.699610 | PLAY [Base post-logs]
2026-03-31 05:06:56.710339 |
2026-03-31 05:06:56.710478 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-31 05:06:57.166185 | localhost | changed
2026-03-31 05:06:57.194459 |
2026-03-31 05:06:57.194655 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-31 05:06:57.234517 | localhost | ok
2026-03-31 05:06:57.241057 |
2026-03-31 05:06:57.241219 | TASK [Set zuul-log-path fact]
2026-03-31 05:06:57.270359 | localhost | ok
2026-03-31 05:06:57.287354 |
2026-03-31 05:06:57.287535 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-31 05:06:57.326638 | localhost | ok
2026-03-31 05:06:57.333045 |
2026-03-31 05:06:57.333213 | TASK [upload-logs : Create log directories]
2026-03-31 05:06:57.839794 | localhost | changed
2026-03-31 05:06:57.842689 |
2026-03-31 05:06:57.842795 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-31 05:06:58.323184 | localhost -> localhost | ok: Runtime: 0:00:00.008010
2026-03-31 05:06:58.327453 |
2026-03-31 05:06:58.327572 | TASK [upload-logs : Upload logs to log server]
2026-03-31 05:06:58.906263 | localhost | Output suppressed because no_log was given
2026-03-31 05:06:58.910089 |
2026-03-31 05:06:58.910275 | LOOP [upload-logs : Compress console log and json output]
2026-03-31 05:06:58.966462 | localhost | skipping: Conditional result was False
2026-03-31 05:06:58.971533 | localhost | skipping: Conditional result was False
2026-03-31 05:06:58.984689 |
2026-03-31 05:06:58.984892 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-31 05:06:59.045556 | localhost | skipping: Conditional result was False
2026-03-31 05:06:59.048206 |
2026-03-31 05:06:59.051195 | localhost | skipping: Conditional result was False
2026-03-31 05:06:59.066105 |
2026-03-31 05:06:59.066298 | LOOP [upload-logs : Upload console log and json output]